HNN Introduces Classroom Activity Kits

As part of a new initiative to merge the worlds of education and journalism, HNN is introducing a series of Classroom Activity Kits.

 

These kits, all crafted by undergraduate students at George Washington University, are designed to illustrate the relationship between current events and history. Each kit consists of a complete 45-60 minute lesson plan that requires little to no preparation on the part of the instructor. These lessons are designed for high school and college students, but they can be adapted to suit other education levels.

 

During these lessons, students will use news articles as a tool for understanding a specific historical topic. In doing so, students will engage in critical thinking, group work, text analysis, research, and more. This first batch of activity kits covers the history of U.S. immigration, climate change, sports activism, and the U.S. prison system.

 

Aside from the educational benefits to students, these kits are also a great resource for educators. All you need to do is click one of the download links below and boom! You have a fully formed, unique activity.

 

HNN’s Classroom Activity Kits are part of a continued effort to bring news into the classroom.

 

Click on each link below to learn more and download the activity kit. 

 

 

Classroom Activity Kit: The History of Climate Change

What do farmers from the 1950s, anti-smoking campaigns and climate change have in common? Download this Classroom Activity Kit to find out.

 

Classroom Activity Kit: The History of U.S. Immigration

This Classroom Activity Kit teaches students about U.S. immigration history while also highlighting their personal histories.

 

 

Classroom Activity Kit: The History of Sports Activism

Discussing athletes from Jackie Robinson to Colin Kaepernick, this Activity Kit teaches students about the history of political activism in sports.

 

 

Classroom Activity Kit: The History of Private Prisons in the U.S.

Download this Classroom Activity Kit to teach students about the history of the American prison system.

 

 

Classroom Activity Kit: The History of Climate Change and Activism

Download this Classroom Activity Kit to help students understand climate change activism in its historical context.

 

A Farewell From Editor Kyla Sommers and A Welcome to Michan Connor

Hello History News Network!

 

I write to inform you of an exciting transition at the History News Network. After a wonderful 14 months as HNN's editor-in-chief, I am moving on to work on converting my dissertation ("'I Believe in the City:' The Black Freedom Struggle and The 1968 Civil Disturbances in Washington, D.C.") into a book. 

 

It has been a true pleasure to work with so many authors, receive feedback from HNN readers, and promote the ways historians engage the public. The History News Network is an incredible resource for historians and the public alike. It was an honor to lead this vital forum. 

 

The George Washington University History Department and I have hired an excellent new editor-in-chief with a background in history and editing. Michan Connor earned his PhD in American Studies from the University of Southern California. His research and published works address American urban and suburban history, urban theory, post-World War II history, and race and ethnicity in America. He's looking forward to publishing contributions from historians in all fields and preserving and developing HNN's position as the place where historians address the public and the public learns about history.

 

As was the case with the transition from founder Rick Shenkman to myself, the contact information and submission process for HNN will remain unchanged.

 

Over a year ago, I wrote an article introducing myself to HNN's readership. In that article, I shared my passion for HNN's mission and wrote, "Historians, I believe, are not isolated in the ivory tower; historians craft arguments that seek to answer difficult questions that affect humanity." Today, the number of difficult questions facing humanity is daunting. Which presidential candidate in this year's election will best lead the United States forward? How do we safeguard human rights in a country and world that increasingly devalue them? How can we ensure democracy does not fall prey to the rise of authoritarianism across the globe? 

 

Working with so many historians during my time at the History News Network has helped me understand these issues within broader historical contexts. I am more convinced than ever of HNN's founding principle: learning history is essential if the public is to better understand the present and tackle today's problems. I look forward to witnessing how HNN will continue this mission in the future.

 

Thank you for your support and encouragement the past year. 

 

 

Abolishing the Electoral College Requires the Unanimous Consent of Every State in the Union

 

Most Americans are by now aware of Article II of the U.S. Constitution, which allocates to each state its respective weight in presidential elections based on the number of representatives to which it is entitled in both the House and the Senate. However, many Americans remain unaware of the last sentence of Article V, which sets forth the only constitutional provision that cannot be abrogated by constitutional amendment unless every state consents. 

 

At the Constitutional Convention, the small-state representatives made clear that without this small but critical electoral advantage (their weight in the Electoral College, based on equal suffrage in the Senate), they would absolutely refuse to join any proposed union and would instead form their own separate nation. 

 

Delaware delegate Gunning Bedford warned his fellow Framers that the small states would walk out of the Convention if the large states persisted in depriving them of their equal representation in any proposed legislature, declaring:  “The small states will find some foreign ally of more honor and good faith who will take them by the hand and do them justice.” Fearing that the large states would bully the small states into giving up their equal representation in the legislature, Rhode Island refused to send any delegates to the Convention at all.

 

With the large states demanding “one person, one vote” in any constituted legislature, and already in the early stages of forming their own separate amalgamation complete with tariffs and borders, deadlock loomed, and with it the collapse of any hope of forming a united country. It soon appeared that the vision of a “United States” would die an ignominious death before even being born. An advisor to King George III gloated that the American colonies were on the verge of collapse, and that the former colonists would soon “openly concert measures for entering into…their former connection to Great Britain.” As delegate Luther Martin later recalled, “We were on the verge of dissolution, scarce held together by a hair.” An exasperated George Washington even despaired of “securing a favorable issue to the proceedings of the Convention and do therefore repent of having had any agency in this business.”

 

On the ‘verge of dissolution’, it was only in the final days of the Convention that the “Grand Compromise” was proposed as a last, desperate measure to create a union. As every schoolchild has learned, this Grand Compromise consisted of three intertwined components: first, a bifurcated Congress consisting of a House based on “one person, one vote,” as demanded by the large states, and a Senate based on “one state, one vote,” as demanded by the small states; second, a presidential election process based on the number of representatives each state was entitled to in both the House and the Senate; and third, and perhaps most important of all, an absolute guarantee against any future demagogues attempting to use the constitutional amendment process to deprive the small states of their equal suffrage in the Senate, upon which their weight in presidential elections is based. 

 

It turns out that the smaller states’ fears of future demagogues using the constitutional amendment process to deprive them of both their equal Senate suffrage and their weight in presidential elections were well founded. Indeed, since the ratification of the Constitution, there have been over 700 attempts to do so, mainly by population concentrations in the large states that eschew the foundations of federalism upon which the Constitution is based, and that prefer the Russian and French systems based on a hypothetical “national popular vote.”  All have failed, however, when confronted by the Framers’ Article V guarantee to the small states, upon which each state’s weight in presidential elections is based.  

 

Thus, when in 1956 a young Senator John F. Kennedy pointed all this out in his famous Senate speech, a Republican attempt to undermine the federalist foundations of the Constitution quickly collapsed. Kennedy reminded his less informed colleagues that abolishing the Electoral College would “greatly increase the likelihood of a minority president…it would break down the federal system upon which the states entered the union, which provides a system of checks and balances that ensure that no area or group shall obtain too much power…”. 

 

To those fellow Senators who begrudged the small advantage the Constitution provides to the small states in the Electoral College, Kennedy offered a reminder: “Today we have an electoral system which gives both large and small states certain advantages that offset each other…” Finally, Kennedy quoted to his colleagues Madison’s vision of federalism as enshrined in the Constitution: “(T)his government is not completely consolidated, nor is it entirely federal…Who are the parties to it? The people—not as the people comprising one great body, but the people comprising 13 sovereignties…” 

 

It is now well known that Kennedy’s valiant defense has since been taken up by minority groups, especially African Americans, who played a critical role in preserving the Electoral College during a later Republican attempt to “abolish” it to the disadvantage of minorities. As Vernon Jordan, president of the National Urban League, later testified: “Take away the Electoral College and the importance of being black melts away. Blacks, instead of being crucial to victory in major states, simply become 10% of the total electorate, with reduced impact.” 

What’s at stake when newspapers abdicate their duty to endorse candidates? Plenty.

There was a time when voters would take a copy of the ballot they had clipped out of the newspaper with them to the polls. Recommendations of the newspaper's editorial staff often served as a guide to decision making, particularly in down-ballot races where the candidates were less well known than the presidential, Senate, and congressional candidates at the top of the ballot. These editorial endorsements prove useful, and often pivotal, when candidates lack name recognition, have limited funds to disseminate a message, or when voter fatigue sets in during down-ballot choices.

 

Yet those days are fading for a number of reasons. Printed sample ballots are old school now that online versions allow voters to enter a zip code and produce a unique ballot specific to their precinct and state elections. Early voting condenses the time available to interview all the candidates. Further, angry voters are ill-disposed to trust media opinions. But another trend is more troubling: the declining practice of newspaper editorial endorsements of candidates.

 

Financially, that choice makes sense. After all, editorial endorsements can be controversial. During the 2016 election, newspapers that endorsed Clinton in heavily Republican areas lost revenue through declining advertising sales and canceled subscriptions, losses directly attributable to disagreement with their editorial stances. These dips are intensified in our polarized political climate. Unlike social media or television news stations that cater to particular points of view, newspapers moved away from being party presses to being neutral arbiters of news in the 1920s and have tried to remain there. Though some still view themselves as objective arbiters of news and endorse based on candidate interviews and familiarity with candidates over time, they suffered in 2016 if they violated the party allegiances of readers.  

 

Traditionally, the influence of editorial endorsements is strongest among moderate voters. Research shows that, unlike with television news, perceived bias in a newspaper's editorials makes its endorsements less influential. Importantly, newspaper endorsements are most effective when they are “surprising” and go against tradition. In 2016, then, we might have expected endorsements for Hillary Clinton from the top conservative newspapers to have had influence. Among the top 100 newspapers as determined by readership, 57 endorsed Clinton while only two (The Las Vegas Review-Journal and The Florida Times-Union) endorsed Trump. Other leading newspapers declined to endorse or, as in the case of three newspapers, merely recommended “not Trump.” Somewhat surprisingly, then, in 2016 those that were predicted to be the most influential paid the price with lost subscriptions, protests, and even death threats. Among those who broke ranks with traditional endorsements was The Dallas Morning News, which had not endorsed a Democrat since 1940. Other traditionally conservative newspapers that endorsed Clinton met similar fates, including The Arizona Republic, which had not endorsed a Democrat since 1890, and The San Diego Union-Tribune, which had never endorsed a Democrat in its 148-year history.

 

Based on this evidence, editorial endorsements should be discontinued, right? After all, financial sustainability is an imperative in our current political and economic climate. Wrong. Now, more than ever, we need thought leaders on editorial boards to step forward. Why? Their duty to inform is coming under attack. Already under financial pressure to survive, newspapers met additional obstacles once President Trump was elected. Not only did he tweet in support of newspaper subscription cancellations as protests against their editorial choices, he denied press credentials to reporters who did not support his candidacy. Shutting down access to information harms the body politic and certainly limits the ability of news organizations to be relevant and timely. That does not excuse journalists from their duty to inform.

 

The purpose of editorials should not be to win hearts and minds; rather, it should be to provoke thought. In an age when it is easier to throw darts than to think, editorials challenge us. Even when we disagree with them, they model thoughtful reasoning. Moreover, they challenge voters to talk about their opinions with others. That is the heart of democratic decision making: voters talking with other voters, citizens entering into respectful disagreement.

 

Down-ballot races suffer when newspapers fail to endorse. These elections are perhaps more important than the flashy top-of-the-ballot races, as they directly affect the day-to-day lives of ordinary citizens. Except in the seven states that still permit straight-ticket voting (where a voter can choose all Democrats or all Republicans with a single vote), there is a decided decrease in voting as the length of the ballot increases. Even in presidential election years, candidates may win a local race with only a few hundred people casting votes. Critically, these down-ballot contests account for 97% of all elections in the U.S.  

 

Newspapers never cease seeking their niche in the media market. Perhaps no greater service could be offered than the civics education that endorsements provide. Just as local newspapers that cover football games and piano recitals are carving out a foothold, print media has the opportunity to increase civics education by continuing to endorse. Who sets property tax rates in your community? What is the diversity among law enforcement leaders, prosecutors, and judges? How are electoral districts drawn in the statehouses? What qualifies a person to serve in those capacities? 

 

Perhaps there is an argument for dropping editorial endorsements of presidential candidates. But the 97 percent of elections further down the ballot? Newspapers, we need you to do your job.

Roundup Top 10!  

Historians Must Contextualize the Election For Voters

by Joanne B. Freeman

Historian Joanne B. Freeman explains why historical context is crucial for getting the election right.

 

The rising panic over coronavirus is likely to make containing it harder

by Danielle B. Wetmore

Panics spread misinformation that makes crafting sound medical solutions more difficult, while fueling bigotry.

 

 

Why Religion Is the Best Hope Against Trump

by Jon Meacham

Evangelicals may support an amoral president. But faith can still offer hope for liberation and progress.

 

 

Has America Ever Been in Such a Crisis Before? Yes, Three Times

by Heather Cox Richardson

A lot of folks have been asking me lately if America has ever been in such a crisis before and, if so, what people in the past did to save democracy. The answer to the first question is yes, it has, three times.

 

 

Katherine Johnson should also be remembered for desegregating higher education

by Crystal R. Sanders

The mathematician’s experience showed how valuable diversity can be for inspiring scientific progress.

 

 

How “Historic” Are We? Going Off-Script in the Age of Trump

by Andrew Bacevich

Truth to tell, the word historic does get tossed around rather loosely these days. Just about anything that happens at the White House, for example, is deemed historic.

 

 

Where Might Trumpism Take Us?

by Jamelle Bouie

For analogies that show us where the nation might be headed, look close to home.

 

 

Sanders, Bloomberg, and the threat of anti-Semitism

by Jonathan Zimmerman

If history is any judge, the Bloomberg and Sanders candidacies will be catnip to anti-Semites of every creed and color.

 

 

History by Text and Thing

by ShawnaKim Lowey-Ball

For researchers, history is a thing we do. It is an activity, a handling of old books, a building seen from the vantage point of its past.

 

 

 

How to Build a Winning Coalition: What Today's Democrats Can Learn from Pennsylvania's Republicans in 1860

by Daniel W. Crofts

The Republican Party’s rise to power between 1854 and 1860 contains lessons that remain pertinent.

The Day After the South Carolina Debate

Steve Hochstadt is a professor of history emeritus at Illinois College, who blogs for HNN and LAProgressive, and writes about Jewish refugees in Shanghai.

 

 

I have been thinking about my years as an op-ed writer in terms of freedom of the press. I had so much freedom to write about what I wanted. But publishing in the newspaper did entail some restrictions, most of which were not important; for example, I could not use bad words.

 

Now that I write for online readers, I realize some newfound freedom from the press. I have no restrictions on length aside from your patience. I can use whatever words I want. Today I take another liberty with my freedom – I send this a day late. Because the Democratic debate is on Tuesday, I want to include it in this column.

 

The last debate, in Nevada, made a stronger impression on me than most of the others because of the highly critical comments all around. The metaphor of a circular firing squad could become accurate if these few Democrats work so hard at tearing each other down that they collectively make all of them unelectable. I had a notion about who had said what and why it mattered, but I wanted to see what really happened by reading the transcript.

 

I think anyone watching the debate would have been struck by the exchange between Elizabeth Warren and Michael Bloomberg in the first minutes. Warren made a statement for the ages: “I’d like to talk about who we’re running against. A billionaire who calls women fat broads and horse faced lesbians. And no, I’m not talking about Donald Trump, I’m talking about Mayor Bloomberg. Democrats are not going to win if we have a nominee who has a history of hiding his tax returns, of harassing women and of supporting racist policies like redlining and stop-and-frisk. Look, I’ll support whoever the Democratic nominee is, but understand this, Democrats take a huge risk if we just substitute one arrogant billionaire for another.”

 

That may be what is remembered, but I think Bloomberg said something even more important about Bernie Sanders first: “I don’t think there’s any chance of the Senator beating President Trump. You don’t start out by saying, “I’ve got 160 million people, I’m going to take away the insurance plan that they love.” That’s just not a ways that you go and start building the coalition that the Sanders’ camp thinks that they can do. I don’t think there’s any chance whatsoever, and if he goes and is the candidate, we will have Donald Trump for another four years and we can’t stand that.”

 

This was not tit for tat. Bloomberg dismissed Sanders’ candidacy as a failure and gave no indication he would support it. Bernie is a loser. The people who vote for Sanders now, who so far have formed the plurality in three primary states, are deluded or worse.

 

Warren was not nice, but she was making a much more reasoned and fact-based prediction about the campaign if Bloomberg were the nominee. Nevertheless, she said she would support “whoever the Democratic nominee is.” The difference is that Bloomberg did creepy things he has to explain, while Bernie is winning Democratic votes.

 

Pete Buttigieg was desperate, I thought, to knock off Sanders, the front-runner, and Bloomberg, the dangerous rival for moderate Democrats. So he lumped them together as dangerous: “we’ve got to wake up as a party. We could wake up two weeks from today, the day after Super Tuesday, and the only candidates left standing will be Bernie Sanders and Mike Bloomberg, the two most polarizing figures on this stage. And most Americans don’t see where they fit, if they’ve got to choose between a socialist who thinks that capitalism is the root of all evil and a billionaire who thinks that money ought to be the root of all power. Let’s put forward somebody who actually lives and works in a middle-class neighborhood, in an industrial Midwestern city. Let’s put forward somebody who’s actually a Democrat. Look, we shouldn’t have to choose between one candidate who wants to burn this party down and another candidate who wants to buy this party out. We can do better.”

Again, the words about Sanders are important: he will burn the party down; he’s a socialist, not a democratic socialist; he is on a par with Bloomberg. The latest polls have Sanders in the lead in New York, Michigan, Wisconsin, Pennsylvania, California, Massachusetts, North Carolina, and Illinois. He is tied with Biden in Texas, comes second to Biden in South Carolina, and runs third behind Biden and Bloomberg in Florida.

 

In nearly all of these states, Buttigieg is in 4th or 5th place. In Midwestern states, whose voters might look favorably on “somebody who actually lives and works in an industrial Midwestern city,” Buttigieg comes in 5th in Pennsylvania, Michigan, Wisconsin, and Minnesota.

 

The Nevada debate made me uncomfortable. The South Carolina debate was hard for me to watch. The danger that has accompanied the months of campaigning and debates so far, that these varied and talented Democrats would kill each other off, was apparent.

 

The arguments of Klobuchar and Buttigieg that unity is most important for Democrats and the country are now a smokescreen for assertions that Bernie Sanders is too divisive, too rigid, and too far to the left, alienating all non-Democrats and threatening four more years of Trump and the loss of the House. They both expressed certainty that Bernie represents a danger to Democrats because of his policies.

 

Within the last few days,

the Economist/YouGov national poll puts Buttigieg and Klobuchar 5th and 6th at 9% and 4%;

the Hill/HarrisX poll puts them in 4th and 6th with 12% and 3%;

the CBS News/YouGov poll has them in 5th and 6th at 10% and 5%.

In Minnesota, her home state, where she has won the victories she constantly talks about, Klobuchar leads Sanders by only 29% to 23%. Bernie’s “favorability rating” among Democrats is the highest, while Buttigieg and Klobuchar occupy their usual places outside the top four.

 

What is divisive is candidates who are rejected by most Democrats publicly yelling that the leading Democratic vote-getter is “burning down the party.” Their most important claim is nonsense: Sanders beat Trump in the most recent CBS News/YouGov poll by a larger margin than anyone else, and other national polls show similar results. He beats Trump by the largest margin in Michigan, Wisconsin, and Pennsylvania, the states at the center of Democratic calculations.

 

The debate last night displayed rudeness, lack of discipline, hostility, and misleading criticisms of fellow Democrats. Buttigieg couldn’t stop making wild predictions about chaos if Sanders is nominated: “I mean, look, if you think the last four years has been chaotic, divisive, toxic, exhausting, imagine spending the better part of 2020 with Bernie Sanders vs. Donald Trump. Think about what that will be like for this country.”

 

The nastiest, lightly veiled implication by Buttigieg and Klobuchar was that Bernie “had shown an inexplicable, suspicious softness toward authoritarian regimes around the world,” as New York Times columnist Frank Bruni puts it. Here is the context. All the moderate candidates position themselves as descendants of Obama. An article in New York magazine’s Intelligencer shows that Obama praised the Cuban educational system, that every President has noted actual good achievements in authoritarian nations, and that some, including Trump, have praised the worst dictators in general terms. The article concludes that examining “the senator’s actual governing record on civil liberties and political freedom” brings him again to the fore.

 

Perhaps I’ll discuss Joe Biden and Elizabeth Warren later. Tom Steyer has no chance, but he is the most modest and least hostile of the candidates. Bloomberg makes it clear that he thinks Bernie is a sure loser to Trump and would be a disaster for the Democratic Party. Bloomberg himself needs to prove that he is really a Democrat, and he hates Bernie’s progressive policies. He has never made the call for unity among Democrats a signature position, except unity behind himself. So it’s not surprising that he attacks the obvious front-runner.

 

Buttigieg and Klobuchar are desperate. They have been campaigning hard for nearly a year and are mired at the bottom of those left in the race. They had virtually no appeal for black and Latino voters in December or in January. Polls in the many states that vote on Super Tuesday, which will probably decide the overall delegate winner, show Buttigieg and Klobuchar outside the top two everywhere and outside the top four in most states, except for Klobuchar’s lead in her home state.

 

Claims that Bernie Sanders is a bad Democrat and a sure loser disdain the Democratic voting public, which puts him in the lead everywhere. These white candidates with only white support are telling voters of color that they don’t know what they are doing. I don’t think Klobuchar and Buttigieg would rather have four more years of Trump than see Bernie leading the country. But their dishonest attacks on a likely nominee may make the difference in a close race.

 

Steve Hochstadt

Jacksonville

February 26, 2020

 

What do Donald Trump, Elizabeth Warren, Michael Bloomberg, and Bernie Sanders have in common?

Ronald L. Feinman is the author of “Assassinations, Threats, and the American Presidency: From Andrew Jackson to Barack Obama” (Rowman & Littlefield Publishers, 2015).  A paperback edition is now available.

 

What do Donald Trump, Elizabeth Warren, Michael Bloomberg, and Bernie Sanders have in common? All have switched parties at some point during their lives. There are numerous other examples of famous politicians who changed political parties. 

 

Perhaps the three best known are two who ran as “Progressives” in the early part of the 20th century and set a standard for third-party reform candidates, and a controversial segregationist in 1968 whose campaign in many ways foreshadowed Donald Trump’s.

 

Theodore Roosevelt, the former Republican President, ran in 1912 as the Progressive (“Bull Moose”) Party nominee and won 27.5 percent of the popular vote, 4.1 million votes, 88 electoral votes, and six states. This remains the best all-time performance by a third-party nominee.

 

Republican Senator Robert M. LaFollette, Sr. of Wisconsin ran for President in 1924 as the nominee of a Progressive Party similar to the Progressive Party of 1912. He won 16.6 percent of the popular vote and the 13 electoral votes from his home state. Some 4.8 million citizens voted for him, and Franklin D. Roosevelt later credited LaFollette as a forerunner of the next decade’s New Deal programs.  

Democratic Governor George C. Wallace of Alabama, an infamous segregationist, ran in 1968 as the American Independent Party nominee and won 13.5 percent of the popular vote, 9.9 million votes, 46 electoral votes, and five states, the second-best performance in electoral votes and states behind Theodore Roosevelt.

 

Beyond these three well known cases, there are 14 others worthy of attention.

 

Herbert Hoover worked in the Woodrow Wilson administration and was at Versailles with the President in 1919.  He was seen as a potential Democratic Presidential contender in 1920, and was even endorsed by Assistant Secretary of the Navy Franklin D. Roosevelt.  However, he served as Secretary of Commerce under Republican Presidents Warren G. Harding and Calvin Coolidge, before being the Republican Presidential nominee in 1928, serving one term as President, then losing to FDR in 1932.  The old friendship was gone; Hoover became a vehement critic of FDR in both domestic and foreign policy, and was never invited to the White House by his successor during the more than twelve years of Roosevelt’s time in the Oval Office.

 

When FDR ran for his third term in 1940, he chose Secretary of Agriculture Henry A. Wallace as his Vice President. Wallace was a former Republican who converted to the Democratic Party while serving in the Roosevelt cabinet. Later, Wallace ran as the Progressive Party’s nominee in 1948 against President Harry Truman, but he had far less impact than the earlier “Progressives” Theodore Roosevelt and Robert M. LaFollette, Sr., winning no electoral votes and only 2.4 percent of the popular vote.

 

Also in 1940, FDR’s Republican opponent was a former Democrat, businessman Wendell Willkie, who was critical of the spending and federal intervention of the New Deal programs. While Willkie performed better than FDR’s opponents in 1932 and 1936, Herbert Hoover and Alf Landon, he was still unable to triumph over FDR in his third-term bid.

 

In 1947-1948, when Truman’s public opinion ratings were at a low point, Truman proposed that World War II General and D-Day national hero Dwight D. Eisenhower consider running for President as a Democrat, with Truman as his Vice President. Eisenhower, then a publicly nonpartisan figure, chose not to take up the unprecedented offer. In 1952, Eisenhower abandoned his political neutrality, ran for President as a Republican, and became Truman’s successor. 

 

South Carolina Democratic Governor Strom Thurmond opposed Truman in 1948, running as a segregationist candidate on the States’ Rights Party ticket. Thurmond won four states and 39 electoral votes, at the time the second-best third-party performance, later surpassed by George C. Wallace in 1968.  In 1964, Thurmond, by then a US Senator, switched to the Republican Party in support of Republican Presidential nominee Barry Goldwater. In the following years, many Southern Democrats switched to the Republican Party.

 

In 1980, John Anderson of Illinois, the third-ranking Republican in the House of Representatives and Chairman of the House Republican Conference, announced his retirement. He then ran for President as an Independent against Republican nominee Ronald Reagan and Democratic President Jimmy Carter. Anderson won no states but did win nearly seven percent of the vote, attracting primarily liberal Republicans, some independents, and some disgruntled Democrats, including historian Arthur Schlesinger, Jr. and former First Lady Jacqueline Kennedy Onassis.  He also had one debate with Ronald Reagan, but President Carter refused to participate in a similar debate with Anderson.

 

Anderson’s Republican opponent, Ronald Reagan, was a Democrat during his years as an actor in Hollywood. Reagan supported FDR and Truman but switched to the Republican Party due to the influence of his wife, Nancy Davis, and her father.  He became nationally recognized as a political figure after his speech supporting Barry Goldwater in 1964. 

 

In 2004, former Vermont Governor Howard Dean seemed the front-runner in the early Democratic Primary season before his fall from grace.  After finishing third in the Iowa Presidential caucuses, he became infamous for a screaming declaration that he would succeed in future primaries and caucuses, ironically leading to his rapid decline. Dean came from a wealthy Republican family and was a Republican as a young man. He switched to the Democratic Party while at Yale University.

 

Hillary Rodham Clinton was a Republican while in high school in Illinois and during her early years in college. She supported Barry Goldwater in 1964 due to the influence of a high school history teacher, but she converted to the Democratic Party while a student at Wellesley College in Massachusetts. After serving as First Lady under her husband, Bill Clinton, she spent eight years in the US Senate, lost the Democratic Presidential nomination in 2008, then served as Secretary of State, and became the Democratic nominee in 2016, losing the Electoral College but winning the national popular vote over Donald Trump by 2.85 million votes.

 

Donald Trump also switched parties a number of times. He started as a Democrat, switching  in 1987 to the Republican Party, then becoming a member of the Reform Party in 1999, back to the Democratic Party in 2001, and then back to the Republicans in 2009.  Along the way, he contributed to many Democratic and Republican politicians, and flirted with running for President in 1988 and 2000, but was not taken seriously until 2015, when he announced his campaign for President.

 

Ironically, his Vice President, Mike Pence, started off as a Democrat and voted for Jimmy Carter in 1980. Pence was inspired by fellow Catholic John F. Kennedy. In college, he became an evangelical Christian and a supporter of Ronald Reagan.

 

The 2020 primary features two Democratic contenders who have notably changed their affiliation. Elizabeth Warren was very conservative and a registered member of the Republican Party from 1991 to 1996. While she was teaching law as a tenured professor at the University of Pennsylvania Law School, her colleagues described her as a “die-hard” conservative and a believer in laissez-faire economics. After 1996, her views changed and she joined the Democratic Party.  

Michael Bloomberg was a lifelong Democrat until he switched registration to the Republican Party to run for New York City Mayor in 2001. He became an Independent in the middle of his second mayoral term and ran as an Independent for his third term in 2009. He remained an Independent until 2018, when he again became a Democrat, and he announced his candidacy for President in November 2019. Bloomberg had considered a Presidential run in 2012 and 2016, but passed on both possibilities before finally becoming a major factor in the 2020 campaign.

 

And the ultimate Independent, Bernie Sanders, was never a Democrat until he decided to run for President in 2016, having had the longest career of any Independent in Congress, across both chambers, in American history.  Sanders switched back to Independent status in 2017 and again became a Democrat when he decided to run for President again in 2019.  While avoiding party identification for most of his career, he has always caucused with the Democratic Party and has voted with the caucus most of the time since entering Congress in 1991.

 

So party loyalties have changed in these 14 notable instances over the past one hundred years, beyond the better-known cases of Theodore Roosevelt, Robert M. LaFollette, Sr., and George C. Wallace.

 

Party changes can reflect the will of individual candidates. Sometimes a person’s worldview changes, or they sense a political opportunity in changing parties. In other cases, party switching might indicate, in hindsight, a deeper change in the party system. Though a third-party candidate has never won the Presidency, such candidates have influenced outcomes and often pushed ideas into the mainstream of one of the major parties (as with the incorporation of aspects of Progressivism into the New Deal, or of Thurmond and Wallace’s racial conservatism into the Republican Party).

Five rejected designs for the US Capitol building

 

Construction of the US Capitol we know and love was completed in 1800, following a competition to find a home for Congress. The contest had been won by a physician with pretensions to architecture, William Thornton, who only got his shot at the prize, after the deadline had passed, thanks to George Washington’s dissatisfaction with earlier entries. 

 

“If none more elegant than these should appear,” the President had declared of Thornton’s rivals, “the exhibition of architecture will be a very dull one indeed.”

 

Even Thornton’s design required some tweaking before it got the green light. But this story shouldn’t lead you to believe that potential alternatives for the Capitol were of zero architectural worth. It is simply that Washington and his colleagues had precise tastes and requirements that had not been met.

 

With this in mind, a band of contemporary researchers set about unearthing the rejected proposals for the Capitol building and resurrecting them in a series of new mock-ups to show how it might have turned out. They turned to Jeanne Folley Butler’s indispensable Competition 1792: Designing A Nation’s Capitol, a special issue of Capitol Studies, the United States Capitol Historical Society's biannual journal. Without the resources to ‘do the honors’ for every failed entrant, the team selected the designs they deemed to be of the greatest architectural or aesthetic interest.

 

How would you feel with one of these structures in place of the familiar dome on First Street?

 

William Thornton’s rejected US Capitol design

 

 

Thornton was a doctor and polymath who had achieved some fame for his writings and ideas about astronomy, abolition, and arms. When he arrived in Washington, he carried designs for a Georgian mansion complete with a six-columned portico and featuring a much smaller dome than the one we see today. After handing over his initial Capitol design, he heard talk of what the President and Thomas Jefferson were really looking for: something a bit bigger, with a substantial dome and a presidential apartment. He amended his design and returned with the successful drawings. 

 

Stephen Hallet’s US Capitol design

 

 

The Frenchman Étienne “Stephen” Sulpice Hallet’s story is a little heartbreaking. His neoclassical design was favored by Washington and was labelled the “fancy piece” among those in the know (although this may have said just as much about the plainness of the other submissions). Hallet was even invited to refine and resubmit his design five times over, albeit to remedy his vastly over-budget vision. The President noted that the Frenchman’s feelings would need to be “saved and soothed” when he was finally pushed aside for Thornton. So Hallet was made superintendent of construction, only to be cast aside again after pushing to integrate his own ideas into Thornton’s Capitol.

 

Andrew Mayfield Carshore’s US Capitol design

 

 

The former soldier and teacher Andrew Carshore was no architect, but he was brave enough to submit his design drawn in perspective. However, it looked rather scruffy due to the Brit’s amateurishness, and the building was rejected for lacking the sparkle that Washington sought for the Congress building. The central pedimented pavilion and portico may be its most notable details.

 

James Diamond’s US Capitol design

 

 

James Diamond was an Irish architect, so his submission was both professional and ‘exotic’ – inspired by the stately homes of Dublin and England. Diamond’s design is characterized by Venetian windows over the arched door, rows of windows set in arches, pediments, and an arcade at ground level.

 

Phillip Hart’s US Capitol design

 

 

The mysterious Phillip Hart submitted a Renaissance-inspired structure that may not have stood up academically or, uh, literally, had it been built. The well-meaning amateur (one presumes) included lots of fine details and some baffling ones (you’d have to stoop to make your way around that top floor). The ramparts are lined with a unique feature: 12 statues of children apparently modelled after the Twelve Labors of Hercules.

 

All in all, then, the Capitol we ended up with is not a bad job. But it is always interesting to be reminded how different our most familiar landmarks might have been but for the taste of our ancestors and the peculiarities of political circumstance. Meanwhile, the real Capitol awaits your visit!

Commemorating the 150th Anniversary of the 15th Amendment

 

February 2020 is the 150th anniversary of the 15th Amendment to the United States Constitution, which declares that “The right of citizens of the United States to vote shall not be denied or abridged by the United States or by any State on account of race, color, or previous condition of servitude.” The amendment also empowered Congress to enforce the mandate by “appropriate legislation.” In his latest book, The Second Founding: How the Civil War and Reconstruction Remade the Constitution (Norton, 2019), Eric Foner argues that the 15th Amendment was the third and final of the post-Civil War Reconstruction amendments that radically redefined the United States by ending slavery, defining the rights of citizens, establishing federal supremacy, requiring state governments to abide by national laws and norms, and prohibiting states from preventing newly freed African American men from voting because of “race, color, or previous condition of servitude.” Much of this essay draws from Foner’s book.

 

The 15th amendment passed Congress on February 26, 1869, was ratified by the required number of states on February 3, 1870, and was certified as approved on March 30, 1870. Passage of the amendment was cheered by African Americans and abolitionists and President Ulysses S. Grant called it “a measure of grander importance than any other one act of the kind from the foundation of our free government to the present day.” 

 

The 15th Amendment had a contentious approval process and has had a rocky history ever since. Republicans largely supported the measure for political reasons, assuming black male voters would back their party and keep it in power. The House of Representatives voted 144 to 44 in favor of the amendment, with 35 abstentions and no member of the Democratic Party in support. It passed in the Senate by 39 to 13, with 14 abstentions and zero Democratic votes. Since approval in each chamber of Congress required a two-thirds vote, the amendment only passed because abstentions did not count as negative votes.

 

Opposition to the amendment came from both the “right” and the “left.” Many white Americans who accepted the legitimacy of an end to slavery did not accept the idea of granting political equality to newly freed blacks. Even if the freedmen were recognized as citizens by the 14th Amendment, citizenship did not automatically include the franchise. Adult women still could not vote. More radical Republicans wanted a wide-ranging ban on suffrage limitations, but a broader amendment was rejected. Abolitionist Senator Charles Sumner of Massachusetts abstained from voting on the final text of the 15th Amendment for this reason. Debate over the amendment caused a split between many white female suffragists and the former abolitionist movement. While Lucy Stone and the American Woman Suffrage Association did support the amendment, Susan B. Anthony and Elizabeth Cady Stanton thought granting the right to vote to black men but not to black and white women was a betrayal of women’s rights. 

 

On March 1, 1869, Nevada became the first state to endorse the 15th Amendment. Former Confederate states that still had Reconstruction governments approved it. New York ratified the amendment on April 14, 1869, tried to revoke its ratification in January 1870 when power shifted in the state legislature, and then re-ratified it in March, after the amendment had already been approved by the required three-quarters of the states. Tennessee did not approve the amendment until 1997.

 

New Jersey initially voted against the amendment and did not approve it until February 15, 1871, almost a year after it went into effect. However, the first African American man to vote under the provisions of the 15th Amendment was a New Jerseyan, Thomas Mundy Peterson, in Perth Amboy, on March 31, 1870. 

 

Approval of the Fifteenth Amendment set off a wave of celebration in African American communities. On April 9, 1870, under the headline “Free and Equal,” the New York Times reported on “Our Colored Citizens’ Jubilee - Celebration in Honor of the Fifteenth Amendment – Imposing Procession – Demonstration at Cooper Institute.” According to the article:

 

The colored population of the City turned out en masse yesterday afternoon in order to celebrate the ratification of the Fifteenth Amendment. The line of march of the procession has already been published, and the various streets designated were lined with the colored people of both sexes. The only changes made in the procession were in substituting Thirty-second for Thirty-fourth-street and in having the speech-making at Union-square instead of in front of the Cooper Institute. No interruptions except in the nature of ill-mannered criticisms occurred to mar the good order characterizing the whole celebration, and the affair reflected great credit on those most immediately interested. A noticeable and praiseworthy tribute of respect was paid to the ‘Father of his country,’ as each and all of those in the line reverently raised their hats as they passed the bronze equestrian statue at Union-square. About 5,000 men were in the line.

 

The keynote speaker at Union Square in Manhattan was the Reverend Henry Highland Garnet, a leading black abolitionist, who compared this celebration with the horrors of the 1863 draft riots when black men were lynched in the streets of New York. Garnet told the crowd, “I have labored for over a third of a century in the cause of freedom. And my heart swells with joy as I look on this assembly and say ‘Fellow-Citizens.’ Many had lost faith that this day would ever come. In this consummation of all their labors and trials they see how rapid has been the progress of the American people in the cause of freedom.” Garnet also later spoke at the Cooper Institute rally.

 

On April 22, 1870, Frederick Douglass addressed the significance of the Fifteenth Amendment at a celebration in Albany, New York. A triumphant Douglass proclaimed that with the suffrage amendment, “The black man is free, the black man is a citizen, the black man is enfranchised, and this is by the organic law of the land. No more a slave, no more a fugitive slave, no more a despised and hated creature, but a man, and, what is more, a man among men . . . The curtain is now lifted. The dismal death-cloud of slavery has passed away. Today we are free American citizens. We have ourselves, we have a country, and we have a future in common with other men.”

 

Unfortunately, soon after it went into effect, the Supreme Court began to limit the amendment’s scope. In United States v. Reese (1876), the Court declared the Enforcement Act of 1870 unconstitutional and ruled that the Fifteenth Amendment did not grant an absolute right to vote. The Court approved supposedly “race-neutral” state laws that severely restricted Black suffrage, such as poll taxes and literacy tests with exemptions for those whose grandfathers had been registered voters. It was not until passage of the Voting Rights Act of 1965, almost ninety years later, that the federal government took steps to protect the voting rights of African Americans encoded in the Fifteenth Amendment. However, in 2013 a crucial provision of the Voting Rights Act, requiring states with a history of voter restriction based on race to receive prior approval from the federal government before changing voting rules, was thrown out by a Supreme Court dominated by Republican-appointed justices. This decision unleashed a wave of voter suppression by Republican-dominated state governments.

 

Teaching material on the 15th Amendment is available from the Zinn Education Project.

Bernie Sanders, Social Democracy, and Democratic Socialism

Bernie Sanders is the best kind of social democrat and sort of a democratic socialist. He roars against economic corruption, severe inequality, and donor class politics, protesting that Wall Street Democrats dominate the Democratic Party. He espouses a welfare state politics of economic rights, which he calls, with a modicum of historical warrant, democratic socialism. Sanders wants the U.S. to institute policies that social democratic governments achieved long ago in Denmark, Finland, Norway, Sweden, and Germany, and he opposes authoritarian forms of socialism, contrary to an incoming avalanche of red-baiting. He resolved forty years ago to wear the socialist label as a badge of honor since people were going to call him one anyway. This decision served him well until now. Now he is obliged, with much at stake, to explain more than ever what the s-word does not mean to him.

 

I say this as a longtime supporter of Sanders and as a current supporter of Elizabeth Warren who believes that she is the best candidate to unite the Democratic Party and attract various others to it. Taking ethical responsibility for the likeliest electoral outcome of November 2020 is a very serious business. But my subject is the democratic socialism of Sanders, which looms ever larger in American politics after the Iowa and New Hampshire primaries.

 

Classic democratic socialism calls for centralized public ownership of essential enterprises, or worker ownership, or mixed forms of public and worker ownership, either decentralized or not. But Sanders has never pushed for any of these things. The closest that he comes to classic democratic socialism is his plank calling for worker control of up to 45 percent of board seats and 20 percent of shares. Similar planks in European platforms have long marked the boundary between modern social democracy and democratic socialism. In Germany, all companies employing more than 1,000 workers have been required since 1951 to institute a supervisory board consisting of 50 percent worker representatives and 50 percent management representatives. German codetermination has been a firewall against runaway manufacturing plants, since workers do not sabotage their own jobs. The Swedish version of codetermination, a union capital fund called the Meidner Plan for Economic Democracy, would have crossed the line eventually from social democracy into democratic socialism. But it was scuttled in 1992 after a 10-year run, confirming that even in Sweden the guardians of neoliberal capitalism do not tolerate transitions to a different kind of system. 

 

Social democracy and democratic socialism have never completely overlapped. Originally, “social democracy” named all socialists that rejected anarchism and ran for political office, while “democratic socialism” named the flank of social democrats that insisted on the liberal democratic road to socialism. Democratic socialists contended against their Marxian comrades that democracy is the indispensable road to socialism, not a byproduct of achieving it. Social democracy was a name for the broad socialist movement and democratic socialism was a liberal-democratic flank of it. 

 

But “social democracy” acquired a different meaning after European socialists compiled records in electoral politics and shed much of their Marxian background. Their prolonged struggle against Communism was formative and defining, as was their swing away from collective ownership after World War II. “Social democracy” came to signify what democratic socialists actually did when they ran for office and gained power. They did not achieve democratic socialism, the radical idea of social and economic democracy. They added socialist programs to the existing system, building welfare states undergirded by mixed economies.

 

“Social democracy” became synonymous with European welfare states in which the government pays for everyone’s healthcare, higher education is free, elections are publicly financed, and solidarity wage policies restrain economic inequality. In the United States, meanwhile, healthcare depends on what you can afford, many have no health coverage at all, students enter the workforce with crippling debt, private money dominates the political system, and severe inequality worsens constantly. Sanders has long challenged the U.S. to aspire to social democratic standards of social decency. In his early career he called it “socialism” because he was a stubborn type who didn’t flinch at red-baiting and the social democratic parties no longer talked about economic collectivism. Then he found himself speaking to a generation that grew up under neoliberalism and does not remember the Cold War.

 

Occupy Wall Street, in 2011, was a harbinger that people were fed up and a breaking point had been reached. Forty years of letting Wall Street and the big corporations do whatever they want yielded protests against flat wages, extreme inequality, the specter of eco-apocalypse, and the neoliberal order. The way that Sanders describes democratic socialism is prosaic in comparison to its history and in the context of the global rebellion to which he speaks. He conceives of democratic socialism as the fitting name for his belief that a living wage, universal healthcare, a complete education, affordable housing, a clean environment, and a secure retirement are economic rights.

 

These six economic rights come from Franklin Roosevelt’s 1944 Economic Bill of Rights. The much-dreaded “radicalism” of Sanders is in fact a throwback to FDR’s State of the Union Address of 1944. Sanders lines up with FDR, Martin Luther King Jr., and Catholic social teaching in believing that real freedom includes economic security. In November 2015 at Georgetown, he reeled off 32 paragraphs about what democratic socialism means to him. All were FDR themes in social democratic language: “Democratic socialism means that we must create an economy that works for all, not just the very wealthy. Democratic socialism means that we must reform a political system in America today which is not only grossly unfair, but, in many respects, corrupt.”

 

Today he is building upon the spectacular campaign he ran in 2016, rewarded for decades of never wavering on behalf of equality. The 2016 campaign showed that Sanders didn’t know many black or Latino organizers and had spent his career speaking mainly to white Vermont audiences. This time his campaign feels much less white and Old Left. The Sanders campaign is a social movement, as he says, not a conventional campaign. Sanders is a magnet for many progressives who have no interest in joining the Democratic Party and who trust that he is no more of a Democrat than they are. My qualm about his candidacy is that he will not be able to unify the party if he is nominated; Sanders has been a terribly solitary figure in the Senate for a long time. But the party bosses must let this play out without cheating him as they did last time. And they really must resist red-baiting the person who may emerge as their nominee.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174385 https://historynewsnetwork.org/article/174385 0
New, Experimental West Side Story Is an Experiment that Goes Awry

The new version of West Side Story that opened in New York on Thursday is not your grandfather’s West Side Story. 

 

No finger snapping by the two street gangs involved in the fight for the streets of Manhattan’s West Side. No fire escape romances between Tony and Maria, the star-crossed lovers. No recognizable scenes in a sprawling “America” song, the one that made Chita Rivera a superstar. No sense of the warfare between the Jets and the Sharks other than a lot of badmouthing and intense staring. No mentoring and friendship by Anita for young Maria, newly arrived from Puerto Rico. Anita is an urban barracuda. You’re far more afraid of her than of both street gangs put together. Worst of all, no “I Feel Pretty,” the classic song that the director cut from the play. Yes, he cut “I Feel Pretty.”

 

All of those changes pale compared to the biggest change in the musical play, one of the best in theater history. The director has filmed the play and runs the film on a mammoth screen behind the actors while they play the scene. You see these tiny actors performing the real play, and behind them, from the stage floor way up to the heavens, is this huge screen as big as the back theater wall, so high that airplanes fly past it. It is discombobulating. The movie runs through most of the show and dwarfs it. You go to the theater to see a play, and you get a nearly two-hour movie. If I wanted to see the West Side Story movie, I’d turn on the television and catch it – for free.

 

That is the one colossal problem with the new West Side Story, which opened last week at New York’s Broadway Theater. The movie simply overwhelms the play, and you watch the movie, not the play. The movie is the same as the play in front of it in some parts and entirely different in others. It has close-ups of the actors’ faces that are as big as the Empire State Building, and they scare you. The movie intrudes on the play, too. There is always a cameraman in the middle of the actors on stage, filming them for the movie that is running behind them.

 

I guess it is experimental theater, but it looks like a junior college Media 101 workshop staged badly.

 

The play has lost its focus and sense of history, too. The 1957 play and the very memorable 1961 movie told a rough tale about life in New York City in the 1950s, the era of the savage juvenile delinquents and the damage they caused and the fear they struck in the hearts of all. Two street gangs fight for supremacy while star-crossed lovers try, in one night and one day, to find happiness. All goes badly. People get killed.

 

Little of that is in this movie, er, play. The two gangs do rumble and Tony does love Maria, but that’s about it. All of the fabled songs are in there, but there is little engaging story around them.

 

Some of the actors were injured during the long months of rehearsal. The man who plays Tony was out for a month with an injury. Another star was hurt so badly he had to leave the play and never came back. There were disputes between the various choreographers on the staging of the musical numbers. There were glitches here and there, troubles filming scenes three stories above the stage.

 

Director Ivo Van Hove (this is his first Broadway musical) worried that audiences might not understand his daring new approach. He is right.

 

The acting, even though a bit exuberant, is fine. Director Van Hove gets good work from Isaac Powell as Tony, Amar Ramasar as Bernardo, Shereen Pimentel as Maria, Yesenia Ayala as Anita, Jacob Guzman as Chino, and Dan Oreskes as a candy store proprietor.

 

Oh, if you loved the music, it is all still enchanting and heart-stopping. “Maria” will still bring tears to your eyes and “Tonight” will get your heart racing. You will still chuckle at “Gee, Officer Krupke.” You will still shake your head in wonder at the score, with music by Leonard Bernstein and lyrics by Stephen Sondheim (Arthur Laurents’ marvelous book has been altered a bit). “Something’s Coming” is still wonderful and “America,” as current today as it was in 1957 with its troubled-immigrants theme, is still enchanting, if you close your eyes and think of Chita Rivera. Can the music save the show, though?

 

This West Side Story needs a lot of work from the east side, north side and south side, too.

 

PRODUCTION: The play is produced by Scott Rudin and others. Video Design: Luke Hall; Costumes: An D’Huys; Sound: Tom Gibbons; Orchestration: Leonard Bernstein, Sid Ramin, Irwin Kostal; Scenic Design and Lighting: Jan Versweyveld; Choreography: Anne Teresa de Keersmaeker (based on the original work by Jerome Robbins). The play is directed by Ivo Van Hove. It has an open-ended run.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174395 https://historynewsnetwork.org/article/174395 0
Is Bernie the New George McGovern?

 

It was close, too close to call in the problem-filled Iowa caucus, but Bernie Sanders and Pete Buttigieg were in a virtual tie for the lead. A week later, Sanders barely won the New Hampshire primary. He will have some serious problems when the Democrats vote in South Carolina, where Joe Biden is struggling but still strong, but as of now Bernie Sanders – Independent, Socialist, far-left Bernie Sanders – is the Democratic frontrunner. 

 

One can see close parallels between the 1972 presidential contest and that of 2020. Those parallels do not bode well for Bernie Sanders or the Democrats. 

 

In 1972, incumbent Republican President Richard Nixon entered the general election touting detente with the Soviet Union, the opening of doors to China, and the prospect of an end to the war in Vietnam. It was a strong record on which to run. But Nixon, always in fear that he might lose, decided to engage in a smear campaign against the Democratic candidates who were running to capture the nomination and face him in the general election. Nixon’s goal was to so damage the frontrunners that the Democrats would end up nominating a sure loser in the general election. He thus ordered his campaign team to engage in a series of dirty tricks and illegal acts aimed at wounding the Democratic frontrunners. First he targeted Maine Senator Ed Muskie and, having wounded him, went down the list of other top Democrats such as Henry M. “Scoop” Jackson, leaving a trail of blood along the way and knocking one Democrat after another out of contention. Finally, the Democrats chose George McGovern, Senator from South Dakota, as their nominee – the weakest opponent in the bunch. 

 

McGovern was the most radically leftist of the field of Democrats, and Nixon was confident that he would turn off moderate Democrats, thereby giving Nixon his much hungered-for victory. And indeed that is exactly what happened. Nixon won in a landslide, with McGovern winning only Massachusetts and the District of Columbia. McGovern couldn’t even win his home state of South Dakota. The final popular vote was 60.7% for Nixon, 37.5% for McGovern. Nixon got what he wanted, and more. 

 

Jump ahead to 2020, and we can see Donald J. Trump taking a page from Richard Nixon’s playbook. Concerned that his top rival at the time, Joe Biden, might be a formidable foe, Trump engaged in a dirty trick campaign to damage Biden. The facts of the case are too fresh to bear repeating, except to say that Trump pressured a weak nation, Ukraine, to begin a corruption investigation into Biden and his son while withholding much-needed funds to fight a war against our and their adversary, Russia. The goal was not to uncover corruption by Biden or his son Hunter – virtually all who have studied this say there was nothing illegal going on – but to create the impression that Biden was so corrupt that even a small nation like Ukraine was concerned – and of course, so should we! For this, President Trump was impeached, and the Republican-controlled Senate saved him from conviction.

 

But did Trump also damage Biden so severely that he virtually eliminated Biden as a viable candidate thereby paving the way for Bernie Sanders to win the Democratic nomination?  Is this not the candidate against whom Trump would most like to run? And as a recent essay in The Atlantic by McKay Coppins reveals, the Trump campaign has plans for a billion-dollar disinformation campaign, to be unleashed against the Democratic nominee during the 2020 race. Could Sanders and the Democrats survive such an all-out offensive? 

 

And the President’s defenders before the Senate – in true Nixon-like fashion – relied on an old Nixonian claim: “When the President does it, that means it is not illegal.” We had thought that this imperial doctrine had been totally discredited with the downfall of Nixon, but apparently not. Alan Dershowitz dusted off this old, utterly discredited view and pressed the Republican Senators to argue – incorrectly – that the president had to have committed a crime to be impeached, and further, that any public official who “believes that his election is in the public interest” has a right to go beyond the law to promote his election bid. As Dershowitz said (with a straight face no less!), “if a President did something that he believes will help him get elected, in the public interest, that cannot be the kind of quid pro quo that results in impeachment.” Let us be clear: that places a president or candidate above the law. Surely that is not what the Framers intended, nor is it what common sense dictates. 

 

Have we really come to this? Both Nixon and Trump illegally undermined the democratic process to tip the electoral scales in their favor. And will President Trump get away with it? His best hope is that he runs against a weak Democratic opponent, someone too far from the mainstream to pose a great threat to the President’s re-election hopes. Nixon got his preferred campaign rival in George McGovern. Will Trump get to run against his chosen rival, Bernie Sanders? 2020 looks eerily similar to 1972. And, as one might suggest, we’ve seen that movie before, and it does not end well. 

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174394 https://historynewsnetwork.org/article/174394 0
The Seen and the Unseen: Queer Lives 1914-1945

 

To use the supernatural as a metaphor for the invisibility of queer men’s lives is a literary device, but it shouldn’t detract from the underlying historical elements of the story. Glimpses of the hidden queer history of the early twentieth century are finally becoming available through film, literature, and memoirs. As historians dig deeper into the accounts of marginalized communities and those who championed them, the veil of secrecy is pulled aside to give a more complete picture of queer lives during the tumultuous war years.

 

One of the earliest pioneers of gender studies, Dr. Magnus Hirschfeld, founded the Scientific Humanitarian Committee in 1897 under the motto, “Justice through Science.” (Sengoopta, 448) Hirschfeld’s primary goal was to make the unseen lives of the LGBTQ population seen through the scientific study of gender. In the hopes of educating laypeople about queer lives and the dangers inherent to a closeted existence, he harnessed the new medium of the silent film and the power of storytelling. He co-wrote and acted in a silent film called Anders als die Andern ("Different From the Others") in 1919.

 

Starring the prominent German actor Conrad Veidt, the movie follows the life of a fictional concert violinist who is blackmailed when a nefarious character realizes the violinist is in a relationship with another man. The latter part of the film centers around Dr. Hirschfeld’s testimony as he plays himself. As he often did in real life, the fictional Hirschfeld testifies as an expert witness; in doing so, he employs photographs and scientific evidence in an innovative tutorial to reach the public.

 

Hirschfeld’s turn in the film is a perfect example of crossing storytelling with science to speak before an audience that otherwise would have never heard his lecture. Despite Hirschfeld’s efforts, sodomy laws remained on the books in Germany. While jurisprudence didn’t erase the existence of the queer community, many personal histories were lost as men were driven to hide behind the veil of heteronormative lives.

 

Exceptions involved the wealthy, but even they were required to adhere to a certain level of discretion. Those who violated societal norms, such as the Spanish novelist Álvaro Retana, were prosecuted.

 

As one of the more seen members of the LGBTQ community, Retana was born of a noble family—his father was the governor of Huesca and Teruel. With money and prestige to cushion him when his adventures ran afoul of the Spanish government’s more conservative laws, Retana lived the flamboyant life of a libertine. A prolific writer, his short novels were mildly pornographic stories of Madrid’s aristocratic queer men interacting with the working classes. In 1920, he spent five months in prison and paid a fine, a relatively light sentence, for being “an immoral writer.” Retana liked to brag that he was “’el primer novelista del mundo que ha ingresado en la cárcel acusado de voluptuoso’ [the first novelist in the world to have been sent to prison accused of being voluptuous].” (Cleminson, p. 244-45)

 

While the Retanas of the world managed to escape relatively unscathed, the Nazis’ march across Germany and France drove others deep into the realms of the unseen. One such survivor was Pierre Seel, a citizen of Strasbourg, who lacked Retana’s money and prestige.

 

Seel stands out because his family was middle-class and without the means to intercede when the Gestapo arrested him in 1941. According to Seel’s memoir, he survived interrogations and subsequent internment at the Schirmeck (Natzweiler-Struthof) concentration camp only to keep his experience a secret for over forty years in order to hide his homosexuality. 

 

Unfortunately, other members of the queer community who survived the camps remain unseen primarily because homosexuality was either against the law or considered an aberration of character in their home countries. Many died with their stories untold, their numbers and names forgotten.

 

Through these glimpses of queer lives, authors of historical fiction are left with a partial picture. Records generally show us the experiences of either the most privileged members of a society, who often escaped prosecution unharmed, or the most underprivileged, who suffered from laws designed to criminalize their behavior. There is rarely any middle ground and, except for a few rare cases, lives unseen slip through the cracks.

 

Utilizing the intersection of history and fiction, authors have the ability to reimagine these missing histories and shape them for a variety of audiences, much as Hirschfeld did with Anders als die Andern. Metaphors, such as the supernatural or hidden societies, enable authors to tell historically accurate stories without perpetuating the myth of the fallen queer man, who suffers nothing but a life of tragedy. Although, like Dr. Hirschfeld, authors might fail to connect with a broader audience, employing fictional narratives can show members of the queer community that their lives and their histories matter.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174392 https://historynewsnetwork.org/article/174392 0
He said, She said: In the Matter of Nat Turner

 

Who was Nat Turner? We know precious little about him. We know that he was a Virginia slave and that in August 1831 he led a rebellion that resulted in the death of fifty-five whites, more than would die in any violent encounter between whites and blacks prior to the Civil War. For some that means Turner was a hero, a black life that mattered and could not be ignored, that forced itself into the lives of oppressors with dire consequences for all those who would hold children and women and men as captives in unending toil. For others it means Turner was a villain, a “fanatic” who plotted “woe and death.” How are we to decide who Nat Turner was? I suggest we start by listening to him.

 

Through the scribbles of his “confessor,” the white Virginia attorney Thomas Ruffin Gray – down on his luck, hoping to cash in on Turner’s notoriety – Turner left us an invaluable account of himself that Gray published in his famous pamphlet The Confessions of Nat Turner, shortly after Turner was hanged. But the deeply Christian, evangelical figure who appears in that account, moved by an extraordinary faith, is one that many observers have pushed aside, dismissed as a “dangerous religious lunatic” obsessed by “deranged visions.” Fifty years ago, the late Eugene Genovese, noted historian of slavery, hailed Nat Turner as a man who deserved “an honored place in our history” because he had “led a slave revolt under extremely difficult conditions.” This was the Turner that Nate Parker recently celebrated in “Birth of a Nation” (2016). But the same Genovese derided Turner as a single-minded religious fanatic, a “madman who had no idea of where he was leading his men or what they would do when they got there.” Like Genovese and Parker, the novelist William Styron wanted to encourage Americans to reflect on the violence and trauma of slavery, and so in his Confessions of Nat Turner (1967) he tried to imagine a Nat Turner that he thought white Americans would be able to understand and commiserate with. But to do so Styron discarded the “religious maniac,” the “demented ogre beset by bloody visions” that he believed he had encountered in the archive of Turner’s Rebellion. In his place he invented a different person, someone actuated not by faith but by what Styron took to be “subtler motives” shaped by his “social and behavioral roots.”

 

It seems that those we seek to mobilize, whose stories we seek to tell, must first be made to conform to the story we desire to tell of them. Genovese wanted Nat Turner to look like Gabriel Prosser or Denmark Vesey – a stern, disciplined revolutionary. So did the great black writer Arna Bontemps, author of Black Thunder (1936), a novel about Gabriel’s Rebellion, and Drums at Dusk (1939), about the Haitian Revolution. Bontemps had considered writing about Turner, but had been turned off by his “‘visions’ and ‘dreams,’” his “trance-like mumbo-jumbo.” As for William Styron, he wanted Turner to be his means to, as he put it, “know the Negro.” But in order to know this Negro, Styron had to transform him into someone he wanted to know. “I didn’t want to write about a psychopathic monster.” All had their own reasons for not listening to Nat Turner. And so, all the while, Turner looks on, helpless and silent.

 

In this country, it is the especial burden of people of color not to be heard, even when speaking plainly. Whether on the street, or in the classroom, or in the legislature, or on the stump, “speaking while black” invites an absence of reception, of listening, of comprehension.

 

Nat Turner spoke while black. When pressed to explain what the whites called his “insurrection” he resorted not to a language of revenge, or revolution, or self-expiation, or guilt, but to an eschatological cosmology of revelation and judgment. He spoke of the deep marrow of his religious ideation, the evangelical Christian faith that drove his apocalyptic eschatology all the way from the dawn of discipleship, to visions of the crucifixion and Parousia, and on to the Last Judgment. This was his explanation of what he called “that enthusiasm, which has terminated so fatally to many, both white and black.” But our stories of Turner have secularized him. They have turned him into something easier to recognize: a rebellious slave. The result is that the Turner who in fact spoke plainly about himself has been rendered mysterious, an enigma: “the most famous, least-known person in American history.”

 

It is, perhaps, human nature to seek out a particular historical personage that we wish to render admirable, or to vilify, a particular politics we wish to celebrate, a particular “knowable Negro” we wish to know. And whether we are scholars, or novelists, or film-makers, no doubt we will find them. But when we find them, particularly white historians like me, we should first listen carefully to what they have to tell us about themselves. 

 

Nat Turner spoke while black and asks us all to listen. If we are to understand him – to understand what sort of man he was, by whom he was charged to act, on whose behalf he acted, and why – we have to begin by listening.

 

And if people like me learn to listen to him perhaps we will learn to listen to others like him, those who also speak while black, those who have so much to tell about that circumstance and in so many languages; those who can tell what can never be known except by listening to them. Then, perhaps, all of us together can make decisions about wrong and right, about what is justified and what is not, about who is hero and who is villain. But only then.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174391 https://historynewsnetwork.org/article/174391 0
Who Can be An American?

 

As we approach the 2020 election, we will see a replay of one of the oldest questions in our history: Who can be an American? When Barack Obama won the presidency in 2008, it appeared that this question had finally been settled, ushering in an era of post-racial politics symbolized by a black man reaching the highest office in the land. But, in fact, the election of our first African American president reopened old racial wounds that were exploited by Donald Trump in 2016, who ran a divisive and fear-drenched campaign that took him to the White House.

 

As I researched both my first and second books, I was amazed at how much this theme runs through American history. There have always been two views of what makes America a nation. One is tied to a traditional racial or ethnic view, ethnonationalism for short. The other is an ideal of America, as expressed in Jefferson’s natural rights section of the Declaration of Independence, which Gunnar Myrdal called the American Creed.

 

Today, the United States is religiously, culturally, and ethnically diverse. Yet we see ourselves as Americans in large part due to a creedal notion of America. In 2018, two scholars at Grinnell College “polled Americans on what they most associate with being a real American.” They found that a “vast majority of respondents identified a set of values as more essential than any particular identity.” As historian Mark Byrnes wrote for the History News Network back in 2016, “The United States is fundamentally an idea, one whose basic tenets were argued in the Declaration of Independence and given practical application in the Constitution.” These ideas revolve around liberty, equality, self-government, and equal justice for all, and have universal appeal. They are not partisan, not tied to any particular political party. Men with such differing political ideologies as Barack Obama and Lindsey Graham share in the creedal notion of America.

 

The universal ideals expressed in the Declaration of Independence and memorialized in the law through the Constitution and the Bill of Rights have not always been achieved, certainly not at the time they were written, and not today as we continue with the struggle to meet them. Nor are they self-actualizing. One way to view American history is as a struggle by individuals and groups to claim their share of these rights, from the abolition of slavery, to the women’s movement to gain the vote and a share of equal rights, to today’s clash over gay and transgender rights. 

 

Despite the strong appeal of the American Creed, 25 percent of those polled by Grinnell College held nativist views similar to those espoused by Donald Trump. The view that ethnicity and race made the United States one people predominated in the early American republic, as my book The Growth and Collapse of One American Nation explores. John Jay, in Federalist No. 2, made the argument that the United States was one nation at the time of the debate over ratification of the Constitution by appealing to ethnonationalism. He wrote that we are “one united people—a people descended from the same ancestors [the British], speaking the same language [English], professing the same religion [Protestantism]…” No doubt Jay overstated the degree of national unity at the time that the Constitution was being debated, and also the extent to which the original thirteen colonies were solely of British origin. 

 

The ethnonationalist perspective cannot describe the United States today—it was inaccurate even in 1790, when we were already a diverse people. While white Anglo-Saxon Protestant men came to dominate the United States, they were not the only ethnic or racial group present in 1790. Black people, most of them enslaved, were almost 20 percent of the total population of the South in 1790. The middle colonies were quite diverse, made up of German and Dutch settlers.  

Immigration is another hot-button issue in the United States, not just today but also in our past. Many of the founders were skeptical about new immigrants who were not of English stock. Benjamin Franklin complained about German immigrants, the “Palatine Boors” who swarmed “into our Settlements . . . herding together” and created a “Colony of Aliens.” “Thomas Jefferson doubted that he shared the same blood as the ‘Scotch’ and worried about immigrants from the wrong parts of Europe coming to the United States,” Francis Fukuyama has written. Nativist movements would rise during periods of large-scale immigration, such as the 1850s, when the Know Nothing Party competed for power.

 

While ethnonationalism has deep roots in the United States, so too does the American Creed. Jay noted how the United States was “attached to the same principles of government.” To Thomas Paine, the country was drawn from “people from different nations, speaking different languages” who were melded together “by the simple operation of constructing governments on the principles of society and the Rights of Man.” Washington saw America as a place that was “open to receive not only the Opulent and respectable Stranger, but the oppressed and persecuted of all Nations and Religions.”

 

We Americans have always struggled with our composition as a nation of immigrants. For the newly arrived and people of color, America has not always been a welcoming place. Black people were brought here as slaves, and native peoples were overrun as the insatiable desire for land led to ever-greater westward expansion. Nonwhites were excluded from being members of the nation during the early Republic. In the Supreme Court’s Dred Scott decision, issued in 1857 on the eve of the Civil War, Chief Justice Taney found that no black person, whether free or slave, could be a citizen of the United States. It is widely considered by scholars to be the worst decision in our history. Fukuyama goes so far as to say that the “American Civil War was, at its root, a fight over American national identity. The Southern states explicitly linked identity to race by excluding nonwhites from citizenship.” The North, which had largely eliminated slavery, was also racist, treating free blacks as second-class citizens. The rise of the abolitionists in the 1830s would eventually lead to the formation of the Republican Party in the 1850s, a group committed to stopping the spread of slavery. Abraham Lincoln would become one of the party’s leaders, a man committed to a creedal view of America who believed that the rights enshrined in the Declaration of Independence applied to all people, black and white, native born and immigrant. His election would also lead to the Civil War over the issue of slavery.

 

The sense of being one nation was fragile during the early Republic, and it continues to be so in the early twenty-first century. Too often, we view those we disagree with as enemies, as members of a different nation. That was the challenge our ancestors faced too, and they allowed their differences to devolve into the Civil War. Perhaps that struggle was inevitable, since the overriding moral issue of their time, slavery, had to be eliminated. Today, we continue to face the problems of our heritage, especially racial prejudice, born out of historical experience. Perhaps lessons can be learned from history that will help in finding ways to work together for the common good.

 

The challenge of our times, especially in the upcoming 2020 election, is to continue our commitment to a creedal vision of America. We need to make a reality of the opening words of our Constitution, that “We the People” means all people who share the American Creed, regardless of race, ethnicity, or religion, and to constantly strive to unleash, in Lincoln’s words, “the better angels of our nature.”

 

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174387 https://historynewsnetwork.org/article/174387 0
Deep Division: Comparing Today and the 1960s

America is a deeply divided nation. That fact may be the only thing that Americans of all racial, ethnic, and political groups can agree about. A Washington Post-University of Maryland poll conducted in late 2017 indicated that 70 percent of the American people think the country is “as divided as during the Vietnam War.”

 

This division manifests itself in political ways, exemplified by the partisan impeachment proceedings and gridlock. The Democratic-led House of Representatives passed 298 bills in 2019, yet the Republican-led Senate considered hardly any of that legislation. 

 

The country is also divided about race. A Washington Post-Ipsos poll taken in January 2020 indicates that 80% of African-Americans believe a white racist inhabits the White House and 70% believe their local police are racist. The FBI, which is responsible for gathering statistics from state and local law enforcement agencies, reports that violent hate crimes have hit a 16-year high. Only once before has there been a greater spike in hate crimes, and that was immediately following Sept. 11, 2001. President Trump’s harsh rhetoric further alienates people of color while satisfying his white right-wing base.

 

The period 1965–1970 was also a time of deep racial and political division. How does that period compare with today? The decade that began optimistically gave birth to an invigorated federal government that took greater responsibility for social welfare and racial equality. When President Johnson signed the Civil Rights Act of 1964, which banned racial discrimination in public accommodations, a Gallup Poll indicated that 58% of Americans approved. Furthermore, President Johnson signed the Immigration and Nationality Act of 1965, which permitted greater immigration to America by people of color. 

 

However, beginning with Harlem in New York City in 1964, then Watts in Los Angeles in 1965 where 34 people were killed, and Newark (63 casualties) and Detroit (43 casualties) in 1967, riots heightened national tensions. A Harris poll in 1968 showed that only 38% of white Americans believed that the riots were caused by black people not receiving true equality while 65% of black people believed that was a major contributor. Moreover, the poll showed that the majority of white Americans believed that the solution to racial unrest was increased policing rather than increased education and employment opportunities for black people. 

 

In the late 1960s, Congressional political division existed as well, as Republicans and conservative Southern Democrats picked up the law-and-order mantle while liberal Democrats and liberal Republicans pressed for equal opportunities and anti-poverty programs. 

 

Richard Nixon and George Wallace took advantage of the racial division and ran on law-and-order platforms in the 1968 presidential election. Nixon and Wallace combined for 57% of the popular vote. Their law-and-order message resonated with the “silent majority,” a reference to whites who obeyed the law and resented violent activism.

 

Yet, despite these similarities, there are key differences between the late 1960s and today. First, there was less political gridlock in the late 1960s. Despite ideological differences, bills passed in one house were genuinely deliberated by the other. Even in the violent year of 1968, Democrats and Republicans came together to pass the Fair Housing Act, banning discrimination on the basis of race, religion, and national origin in the sale, rental, and financing of housing. The bill passed both houses of Congress by a wide margin, with 29 of the Senate’s 36 Republicans voting for it. One of those Republicans supporting the bill was a young Texas Congressman named George H.W. Bush. Even after the election of Richard Nixon, Democrats and Republicans were able to come together to pass key legislation. For example, Congress passed the National Environmental Policy Act in 1969, and the Environmental Protection Agency was established in 1970.

 

In comparing and contrasting racial division in the late 1960s with today, certain salient differences exist. In the 1960s, many black urban dwellers felt a deep frustration as the civil rights movement and anti-poverty programs did not improve their lives, which occasionally resulted in riots. Many white people reacted with a backlash against the racial disorder as they claimed that liberal, permissive leaders caused the turmoil. 

 

In today’s era, white people are reacting to changing demographics as the U.S. will soon become a majority-minority nation. That white reaction helped elect Trump, a president whom many people of color view as racist while his white supporters view him as a hero who shares their frustration with a racially changing America. Numerous studies of the presidential election of 2016, including one in 2018 by Stanford University political scientist Diana Mutz, indicate that racial anxiety was a greater motivating factor for Trump voters than economic concerns. In this atmosphere, some who harbor this racial hatred commit hate crimes.

 

Today the president and his party accept racial division and find it politically beneficial. The president and his party appeal to a narrow base. In the late 1960s the U.S. Government tried to create a country of racial equality by promoting civil rights and anti-poverty programs. However, when racial violence emerged in big cities, national politicians arose (Nixon and Wallace) who saw divisive racial rhetoric (restore law and order) as a path to power. Today, people of color feel similarly alienated, but we have not experienced big-city riots. As in 1968–69, many feel the U.S. president came to power and governs in a white racist manner. In that sense, we are in the same racially divided era as we were during the last years of the 1960s.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174390 https://historynewsnetwork.org/article/174390 0
The Myth of Thomas Jefferson’s Inscrutability

Political differences between Thomas Jefferson and Alexander Hamilton centered on contradictory interpretations of the U.S. Constitution.

 

Hamilton favored a slack interpretation of the Constitution—viz., what the Constitution did not strictly prohibit it sanctioned. One large issue for Hamilton was the Bank of the United States (BUS). There was no clause in the Constitution that forbade it, so it in effect had Constitutional sanction. Acceptance and implementation of such loose constructionism would give the Federal Government great power to act as it saw best to act, especially during times of crisis—today, the Basket Clause of the Constitution—and that would lead both to somewhat discretionary interpretations of the document and to government often deciding for the citizenry what is in their best interests without the consent of the people.

 

Jefferson thought that all Federal actions ought to be in strictest adherence with what was explicit and implicit (in the logical sense of “implicit” where “p implies q” entails that q is contained in p, just as “bachelor implies man”) in the Constitution—viz., what the Constitution did not address it forbade. Jefferson, thus, objected to BUS. It was unconstitutional precisely because the Constitution said nothing about Federal powers to enact such an institution. Jefferson, as a strict constructionist, was chary of a slippery slope: Allowance of one action in pursuance of a slack interpretation of the Constitution would sanction other actions, and soon lead to the sort of artificial aristoi that John Adams championed—a strong oligarchic, not a republican, government.

 

Something similar, I argue, has been occurring for decades in Jeffersonian scholarship. All scholars admit the extraordinary difficulty of doing Jeffersonian scholarship. Lawrence Kaplan, in Thomas Jefferson: Westward the Course of Empire, acknowledges that Jefferson’s voice has been “the most persuasive in the first generation of the republic’s history,” but adds that “it might also have been the most paradoxical.” John Boles, in Jefferson: Architect of American Liberty, calls Jefferson a “tangle of apparent contradictions.” Both refer to Jefferson, for instance, being a champion of eradication of slavery, but a slave-owner, as well as being a champion of the people but living large. Joseph Ellis in American Sphinx states that Jefferson’s greatness lies in his capacity “to articulate irreconcilable human urges at a sufficiently abstract level to mask their mutual exclusiveness. … The Jeffersonian magic works because we permit it to function at a rarefied region where real-life choices do not have to be made.” Peter Onuf in The Mind of Thomas Jefferson goes so far as to admit the impossibility of knowing the true Jefferson. He spoke in so many voices to so many persons that disentangling the true Jefferson from the voices is a Sisyphean task.

 

The nodus is that scholars seldom approach the “study” of Jefferson as a tabula rasa. In Merrill Peterson’s words, scholars seldom concern themselves with “the history Thomas Jefferson made,” but they come to Jeffersonian scholarship with their own political agenda. Thus, Peterson’s thesis in The Jefferson Image in the American Mind is about “what history made of Thomas Jefferson”—a critique of “biographies” of Jefferson. Where Peterson left off when he finished his watershed work in 1960, another scholar, with equal analytic acuity, could easily begin anew.

 

Since Peterson published his book, confusion concerning the mind of Jefferson has gotten greater, due to political posturing by conservative and liberal scholars, and appropriation, literally misappropriation, of the words of Jefferson to suit their political needs—that is, loose-constructionist approaches to Jefferson. Rightists use Jefferson to show that America is today a microcosm of, in the words of Voltaire, “le meilleur des mondes possibles” [the best of all possible worlds]. Leftists use Jefferson to show that the hypocrisy and racism they find in America today is the legacy of Thomas Jefferson.

 

The leftist vilification predominates in the scholarship on Jefferson since Peterson published The Jefferson Image. Scholars such as Onuf, Paul Finkelman, and Pauline Maier have been intent on showing the ordinariness, or even inferiority or viciousness, of Jefferson.

 

While Onuf bids scholars not to craft images of Jefferson as a god—Jefferson is not to be a synecdoche for America—he praises almost exclusively those scholars whose views of Jefferson are denigrative. That is consistent with his synecdochic thesis, but in the end, we have not only leveled Jefferson to the ground, we have buried him in it. Jefferson’s vices are numerous and manifest; his virtues, few and not so apparent. Moreover, Onuf himself speaks of Jefferson’s “disguised motives and moral lapses” and concludes that Jefferson is “a monster of self-deception.” Those are not the judgments of someone whose aim is disclosure of a human Jefferson. Onuf’s Jefferson is less than human. There is no mention, for instance, of the decades of Jefferson’s life spent in service to fellow humans.

 

Thus, the problem with the synecdochic thesis is Onuf’s inconsistent application of it. He disallows scholars from letting Jefferson stand for all that is right in America, but he allows them to let him stand for all that is wrong with it.

 

Finkelman argues that Jefferson wished to eliminate the institution of slavery only because of “what slavery did to white people.”

 

The claim that Jefferson wished to eradicate slavery only because of its degenerative effects on whites flies in the face of what Jefferson writes in Query XVIII of Notes on the State of Virginia. There Jefferson writes of the degrading effects of slavery on whites as well as on black people—“The whole commerce between master and slave is a perpetual exercise of the most boisterous passions, the most unremitting despotism on the one part, and degrading submissions on the other” (my italics)—a point Finkelman conveniently misses.

 

Finally, Maier states boldly that Jefferson was “the most overrated person in American history.”

 

Maier’s statement typifies a tendency in the secondary literature to use flippantly hyperbolic language to express a point. Yet her point, if true, implies thorough understanding of the mind and motives of Jefferson as well as her acquaintance with every person in American history at the time of her utterance. And so, the claim is irresponsibly bold, and roundly false. What is most astonishing is that the superlative survived excision in the process of academic screening, which is supposed to be based on academic integrity.

 

This, of course, is merely a minuscule, but representative, sampling from the secondary literature. What I have said critically of leftist vilification applies to rightist sanctification, but very little of the rightist slant seems to find its way into today’s academic journals.

 

My point, in keeping with Peterson’s thesis, is that the “paradoxes,” “apparent contradictions,” and “contradictions” generally evanesce when we read Jefferson literally, circumspectly, and without contextomy—i.e., in a strict-constructionist manner. Jeffersonian scholarship is a Hydra’s head, mostly because we make it so.

 

Jefferson is a difficult subject for any historian, because of Jefferson’s polymathy. That polymathy requires that prospective biographers know the literature that Jefferson read—especially the Greek and Latin authors that he cherished (e.g., Tacitus, Seneca, Antoninus, and even Plato, and here it is prudent to have knowledge of Greek and Latin), the moral-sense and moral-sentiment philosophers of his time (e.g., Shaftesbury, Kames, Adam Smith, and Hume), and the empiricists of his day (e.g., Bacon, Bolingbroke, Newton, Hume, Tracy, and Kames). It just might be that the defects that we find in Jefferson are due to our own ignorance, not to moral maculae in the man.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174389 https://historynewsnetwork.org/article/174389 0
How Can Historians Serve and Learn From the Public?

 

Annette Joseph-Gabriel is an Assistant Professor of French and Francophone Studies at the University of Michigan, Ann Arbor. She is the author of Reimagining Liberation: How Black Women Transformed Citizenship in the French Empire. Learn more about her work at her website. You can also find her on Twitter at @AnnetteJosephG

 

What books are you reading now?

 

I am currently re-reading Saidiya Hartman’s Scenes of Subjection: Terror, Slavery, and Self-Making in Nineteenth-Century America. I have also been sitting with Imani Perry’s Vexy Thing: On Gender and Liberation which has been helping me to work through historical and contemporary constructions and meanings of patriarchy. I have been thinking about liberation a lot in my work, but even as I find more and more scholarship on the different ways that we might define freedom, I am turning to these two texts for their lucid and clear-eyed dissection of the oppressions from which people seek liberation. 

 

What is your favorite history book?

 

This answer changes often, but a constant favorite is CLR James’s The Black Jacobins which, in addition to its analysis of the Haitian Revolution, is also an important text for thinking about the relationship between different pasts and between past and present.

 

Why did you choose history as your career?

 

If I am to position myself disciplinarily, I identify more readily as a literary scholar who works with historical texts. Throughout my academic training, many of my mentors emphasized productive conversations between literary and historical methods. Particularly generative discussions with my history professors highlighted for me the importance of grounding literary analyses in their historical moments even while thinking about the ways that texts strained against the limits of those moments. I think about disciplines like history and literature not so much in terms of gatekeeping but more as a set of methodologies that offer different ways to approach a given question. Of course, those methodologies are only a beginning to intellectual inquiry, but they can be a useful beginning, nonetheless.

 

What qualities do you need to be a historian?

 

First, a healthy dose of curiosity for sure! Sometimes the smallest detail in a scholarly work such as the weather on a given day can be the result of wading through reams of sources in the archives. But I see in those small details a desire to better understand and more completely reckon with the world into which a person, an idea, a debate was born. Second, it’s also important to be able to move back and forth between distance and proximity, especially when working through histories of trauma. Whenever I am immersed in primary sources about slavery in the archives, I get overwhelmed by the quotidian nature of extraordinary brutality. It helps to find preservation strategies in the midst of that violence. I usually wear headphones in the archives and listen to the most contemporary pop music I can find as a way of pulling me into a particular present. 

 

Finally, I think flexibility is a crucial quality. With archival work, I am never sure what I will find, and sometimes historical subjects defy our expectations, assumptions, hopes. They respond to the circumstances of their lives in unexpected ways that challenge the intellectual lenses we apply to our reading of those lives. It helps to be willing to see and hear historical subjects on their own terms.

 

Who was your favorite history teacher?

 

I took my very first history course proper in high school in Ghana. My teacher, Mr. Tay-Agbozo (affectionately called Mr. Tay) taught us much more than the history of the Cold War or the Biafran War. He taught us to be aware of the primarily US-centric focus of our history textbooks (yes, in Ghana too our history textbooks carried evidence of imperialism’s reach). He taught me the questions that I bring to every set of historical documents I examine, notably “who is constructing this narrative for me?” and “what are the stakes of this construction?” Since then, professors Kenda Mutongi, Shanti Singham, Tiffany Patterson and others have modeled for me how to undertake historical work that troubles received ideas about national borders and supposed separations between scholarship and community work.

 

What is your most memorable or rewarding teaching experience?

 

Teaching The Book of Night Women by Marlon James remains my most rewarding teaching experience. It’s a difficult book to teach because students are initially apprehensive about the language, and the historical period of slavery seems to them so far removed from their own contemporary realities. The first time I taught this book I was apprehensive too. I was skimming through my lesson plan as students walked into the classroom and to my surprise they launched into discussion before their butts hit their seats. I initially thought they were discussing the latest episode of a TV series. They were so intrigued by the plot and were quizzing each other on why the protagonists, enslaved women on a plantation in Jamaica, made the choices they did. When I realized that their conversation was hitting all the points that I had in my lesson plan I just pulled up a chair and listened, only offering direction once in a while. I learned so much from my students that day! Those class sessions are the most rewarding, when my students are active contributors to the knowledge creation process.

 

What are your hopes for history as a discipline?

 

This is a difficult question because as I said before, my academic training is not in history and so I see myself as a bit of an interloper in these conversations. But I am really intrigued by the different ideas about what constitutes public history, especially as those ideas are being challenged and refined on social media. My hope for the discipline is that we continue to have conversations about what it means for the discipline to engage with the public. Who are these publics? What do they need? How might we serve them, learn from them? And as always, what are the stakes of our interventions? 

 

To give a concrete and recent example, the 1619 project in the New York Times in August was hailed for the needed visibility that its wide circulation brought to how fundamental slavery was to this country’s foundations. But it also raised questions about who and what this framework of foundations obscures and what it says about political imaginations that tether certain histories to the nation. These public conversations were often contentious, but they remain necessary. My hope for the discipline is that these conversations will continue and will be undergirded by intellectual honesty and rigor.

 

Do you own any rare history or collectible books? Do you collect artifacts related to history?

 

I do not collect rare books or artifacts. I have a difficult relationship with archives because I once came across my grandfather’s personal (very, very personal) letters in the national archives in Ghana and it was such a surreal moment for me. The letters, like many that I relish “discovering” in my own archival work, were never meant for me or for some random historian to find. I had to balance that knowledge with my desire for archives that give us more access to the lives and voices of marginalized people. 

 

Creating a personal archive of history collectibles would come with similar tensions between the impulses of private holdings and public knowledge and would feel just as weird, I think. That said, I do have one artifact that I cherish very much: a bound selection of old articles from the Martinican newspaper France-Antilles. It was sold in Martinique as one of those revenue-generating collectors’ items. My mother-in-law, whose formal education ended in middle school, gifted me a copy with these exact words that I will not soon forget: “I don’t know what this is, but it looked like something that would interest you.” In my book, I write about my mother-in-law’s post World War II experiences in Martinique and I remain struck now, as I was when she gave me that bound collection of old newspaper articles, how much of this history she has lived, and how much she has contributed to this historical artifact whose form eludes her. 

 

What have you found most rewarding and most frustrating about your career?

 

I have too much to say about the frustrations, especially as a Black woman and a non-US citizen on the tenure track who is always navigating the failed egalitarian promises of academic institutions and their bureaucratic processes. What I find most frustrating is the constant feeling of being held hostage to evaluation processes and criteria that cannot account for me, my presence, my methods, and the stories I am invested in telling. What I find most rewarding is that I have been fortunate to have incredibly generous and brilliant mentors like Tracy Sharpley-Whiting and Trica Keaton who help me to make sense (and recognize also the nonsense) of the academy. It has been rewarding for me to emulate those who I see resisting the conflation of “this is what I do” and “this is who I am.” In other words what I enjoy most about my career is being able to leave it behind at the end of my working day and go off and be other things.

 

How has the study of history changed in the course of your career?

 

Yikes, I don’t think I’ve been around long enough to chart any sort of meaningful or significant evolution. 

 

What is your favorite history-related saying? Have you come up with your own?

 

There is a Ga proverb from Ghana that says, “What has never happened before is behind the ocean.” It’s a bit of an awkward translation but it’s basically another rendering of the adage that there is nothing new under the sun. I like it for its reminder of the importance of history for understanding the present and the future. There is another Ghanaian proverb that says, “Do not look where you fell, but where you slipped.” It’s not explicitly a history-related saying but I think it’s such a great way to think about what we mean by learning from history. It’s great to think about where in the past you turn to if you are trying to understand an outcome in the present. 

 

What are you doing next?

 

Too many things, probably. I am working on my second book project. It is about children and slavery, and I find it both exciting and heartbreaking work. I am also collecting material for my third book project on French Antillean feminisms. The impetus for this book is different than for the second book. I want to teach books about feminist thought in Martinique and Guadeloupe and I can’t find any monographs that meet this need. It feels like there might be some degree of hubris in my saying, “well, why don’t I just write one, then?” But after writing my first book, I think this is something I can do. But above all, I am reading, reading, reading. My plan is to devote the next year or so to just reading a lot and slowly. The tenure track has deformed my reading practice into one of necessity. But I got into this profession because I wanted to read more and learn more. It is an incredible privilege to find the time to do this as a sustained practice and so I am very glad to have this next year to do just that. You began by asking me what I am reading now so it seems fitting to end with what I will be reading next. At the top of that list is To Exist is to Resist: Black Feminism in Europe edited by Akwugo Emejulu and Francesca Sobande, Transatlantic Feminisms: Women and Gender Studies in Africa and the Diaspora edited by Cheryl R. Rodriguez, Dzodzi Tsikata and Akosua Adomako Ampofo, and Sasha Turner’s Contested Bodies: Pregnancy, Childbearing, and Slavery in Jamaica.

 

 

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174388 https://historynewsnetwork.org/article/174388 0
A Conversation with the Eastham Historical Society

 

The Eastham Historical Society is a locally funded society that provides Cape Cod visitors and residents of Eastham, Massachusetts, a look into the past with its impressive collection and variety of landmarks. This is a conversation with Eastham Historical Society’s Secretary Sharen Shipley. 

 

Jonathan Montano: Eastham is a very small town tucked away on Lower Cape Cod. Not many Americans would recognize it by name. But what can local history teach us about the past?

Sharen Shipley: Eastham was incorporated as a town in 1651, only 31 years after the landing of the Pilgrims, providing a treasure trove of history. The mission of Eastham Historical Society (EHS) is “Preserving Eastham’s Past for Eastham’s Future.”

 

Montano: What role has Eastham and Cape Cod played in broader history?  How can that alter our understanding of broader historical narratives?

Shipley: Many people are not aware that the Pilgrims landed in Provincetown and remained there a few months before sailing on and landing in Plymouth. While the ship lay in Cape Cod Bay, a party in a shallop made shore in Eastham, where the Pilgrims had their first encounter with the Nausets of the Wampanoag Tribe on what is now First Encounter Beach in Eastham.

 

Montano: Eastham clearly has a very long history.  What type of documents do you have in your archives?  What collections are you most interested in?

Shipley: The EHS Archives contain extensive genealogies of the first seven families of Eastham, as well as ancient maps, ships' logs and historical documents. Our collection also contains numerous arrowheads, tools, grinding stones and implements of the Native peoples. Our permanent collection contains many antiquities of Captain Edward Penniman, whose home is on Fort Hill in Eastham. We also have many items salvaged from Henry Beston's Outermost House. This house and Beston's book provided the impetus for President Kennedy to create the Cape Cod National Seashore in 1961.

 

Montano: What can a visitor see at one of your historical sites, such as the School House or Swift-Daley complex?

Shipley: The 1869 Schoolhouse Museum is the original one-room school where grades one through eight were taught until the opening of Eastham Elementary School in 1936. The headmaster's desk, globe, blackboard and regulator clock are original to the building. A wood stove and desks are from the period. This year the schoolhouse celebrates its 150th anniversary.

The Swift-Daley House, so named because one of its owners was the founder of Swift Meats, was built in 1741 by Joshua Knowles. It features a large central fireplace for cooking and heating, as well as many original furnishings and clothing. The house also features a bowed roof typical of homes built by ship's carpenters.

Dill Beach Camp, built in 1936, was the only dune beach camp to survive the infamous blizzard of 1978. After the storm, the Dill family floated the camp across the marsh to their property, where it remained until donated to EHS by Thomas Dill in 1995. It is a perfectly preserved duck hunting and fishing camp.

The Ranlett Tool Museum is filled with hundreds of antique tools and implements, many unique to the area, as well as a working forge.

 

Montano: What can local institutions such as EHS teach us about American History?

Shipley: EHS currently has special exhibits honoring 12,000 years of the Native Peoples, as well as the Mayflower Compact and passenger list. Three who sailed on the Mayflower are buried in the Cove Burying Ground on Route 6 in Eastham. EHS is privileged to be the caretaker and keeper of Eastham's historical antiquities and artifacts. Our Archives are open from spring through mid-November on Tuesday afternoons.

During July and August, we present a Thursday night speaker series. We have featured speakers as diverse as Chris Macort, a marine archaeologist, on the ongoing exploration of the Whydah; Marcus Hendricks and Todd Kelley of the Wampanoag Tribe; Revolutionary War descendant Art Richmond on the evolution of the Cape Cod house; Don Wilding on Henry Beston's life and the Outermost House; as well as countless others.

 

Montano: How does EHS add value to the local community?  How can the public engage with your sites/archives/lessons?

Shipley: Thanks to our dedicated volunteers, our historical sites are open for the appreciation and education of Eastham's residents and visitors during the summer months, when docents give tours and provide historical context. Descendants of Eastham families are invited to visit our Archives to research their family genealogy. 

 

 

We welcome all to visit our museums, Archives, special events and Thursday night speaker series. We also encourage Cape Cod full- and part-time residents to join the Society and get involved. Please visit our Facebook page or www.easthamhistoricalsociety.org.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/173615 https://historynewsnetwork.org/article/173615 0
In Defense of Public Institutions

At the end of 2019, Fortune reported that the United States Postal Service could be privatized in 2020. As the Fortune article highlighted, the U.S. Postal Service has a long and rich history, going back to the days of Benjamin Franklin. Its legacy of privacy was guaranteed by the leadership of George Washington and James Madison. But we now live in different times. While the generations of the past sought to establish public institutions, in our era there is a relentless push for more privatization, far beyond the postal service.

 

In the twenty-first century, the funding for public universities has been increasingly imperiled and challenged at state and federal levels. Since his election, Trump’s administration has sought ways to privatize the National Parks or sell off large portions of public lands. In January, Jonathan and Destry Jarvis warned that the National Parks are being dismantled. There is, at this moment, a wide-ranging and accelerating assault on public institutions in a dramatic bid for more privatization. The fact that it has been so feebly resisted suggests that the significance of public goods and institutions has been forgotten.

 

The arguments against public institutions and services are fairly straightforward. In many sectors, private companies can, and do, work more efficiently and lucratively. A privatized postal service would generate tax revenue. A sale of public lands would positively impact the treasury. Less funding for public universities would mean less of the cost of education would be borne by the public at large. The objective of a more privatized system is a leaner government and a more robust economy. 

 

In her 2015 lecture, “What Are We Doing Here?” Marilynne Robinson described this type of thinking: “The logic seems to go like this: To be as strong as we need to be, we must have a highly efficient economy. Society must be disciplined, stripped down, to achieve this efficiency and make us all better foot soldiers.” Everything must be weighed against private profit and subject to competition, so that the U.S. will have a more dynamic economy poised for maximum success in the international arena.

 

Advocates of privatization hold out the promise of economic strength, but reality suggests a less certain vision of a more private future. Bureaucrats are not the only ones who struggle with budgets. Every year, many private companies cease to be profitable or even to exist. Many successful private companies pay no more in taxes than tax-exempt institutions. Personal experience also shows how often private corporations are just as inefficient and unresponsive as governmental bureaucracy. Anyone who has ever needed an insurance company to pay out for a big bill or who has waited on a phone line to speak to a living customer service representative knows that the government has no monopoly on being unresponsive or obtuse. We are not guaranteed customer satisfaction or great tax revenue under either model.

 

Perhaps because public institutions and initiatives have not produced wealthy, charismatic CEOs, we underestimate the ways that they have enriched us. In the 1890s, the U.S. Postal System initiated “Rural Free Delivery,” an innovation which helped raise the rural standard of living. Though it cost the government money, as Robert Gordon shows in his masterful book The Rise and Fall of American Growth, Rural Free Delivery fueled the rise of the great mail-order companies and helped spur the building of highways. Integrating rural America into the economy was not just a kindness to rural places; it was a spur to nationwide economic growth. The massive expense of the G.I. Bill after World War II placed the burden of educating veterans on the public’s shoulders, but it paid off by giving the United States some of the best educated and highest earning workers in the world. In the cost-benefit analysis of investing in the public, we must seriously consider the benefits.

 

But despite the way that privatization has been framed by advocates, the most significant distinctions between public and private are not efficiency or profitability—they are accountability and access. Public institutions are, ultimately, under the supervision of the public itself. Citizens have levers of control within government that do not exist in the private realm. Citizens have a right to vote for senators and presidents. In some states, voters can directly amend their state constitution. Customers do not choose CEOs. Only shareholders have voting rights in private corporations. Your Congressman’s work email and phone number are public information. You will not find “contact information” under the Amazon “help” tab. Your taxes entitle you to certain goods and services. Your personal spending entitles you to very little. It is legal for airlines to sell you a seat on a plane that they have already sold to someone else. No amount of customer loyalty guarantees you any access to good customer service and you cannot “vote the bums out.” Neither are most companies small enough to be much affected by your decision to shop elsewhere.

 

Since our country began, we have struggled our way toward a fuller realization of the principle that citizens have equal rights under the law. Consumers have only their purchasing power. Private corporations serve their owners and shareholders. Public institutions are intended to “promote the general welfare.” The National Parks are for everyone. Public transportation is for the entire public. Public institutions exist because citizens are stakeholders, not in a company, but in an entire country, with a mutual interest in each other’s success and well-being. Republican or Democrat, if we initiate changes and policies that are designed for the benefit of the few at the expense of the many, or at the expense of less appealing consumers, we will find ourselves only more divided.

 

What will we do with our inheritance of public institutions, our national parks, our public universities, our postal system? Decisions about public lands and institutions should be the product of public conversation, not hasty or partisan initiatives. There may be need for reform, or room for some privatization. But there are certain things that, if we give them up, we will never get back—certainly not at cost or in the same condition. 

 

Theodore Roosevelt described America’s public lands as “the most glorious heritage a people ever received, and each one must do his part if we wish to show that the nation is worthy of its good fortune.” Should we sell off that heritage piecemeal? In the book of Genesis, a hungry Esau sold his birthright to his brother Jacob for a bowl of lentil stew and some bread. His was a shortsighted decision that “showed contempt for his rights as the firstborn.” We should not give up too easily the public goods and institutions that earlier generations left behind for us. To do so is to show contempt for the generations before and after us.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174328 https://historynewsnetwork.org/article/174328 0
“Patriotic” Veterans Only, Please

 

When news broke back in November 2019 that U.S. Army Lt. Col. Alexander Vindman would be offering testimony at the House impeachment inquiry of President Donald Trump, it didn’t take long for many conservative pundits to attack the National Security Council official’s patriotism.

 

Fox News stalwarts Laura Ingraham and Brian Kilmeade led the charge, publicly intimating that Vindman, despite being wounded in Iraq and earning a combat infantryman’s badge, might be a Ukrainian double agent. More recently, Tennessee Sen. Marsha Blackburn shared a tweet claiming: “Do not let the uniform fool you. He is a political activist in uniform.”

 

While perhaps an indication of our current toxic political environment, the attacks on “unpatriotic” veterans like Vindman—which have continued unabated into this year—actually have a long and checkered history in post-World War II America. Citizens may reflexively honor their countrymen and women who have served—and continue to serve—our nation in uniform, but they too often are eager to attack those who don’t fit their preconceived notions about what it means to be a veteran. 

 

For those who break socially sanctioned views of the “patriotic” veteran, public wrath can be as swift as it is outraged. Along with Vindman, two examples illustrate this point: Ron Kovic, Vietnam War veteran and author of the bestselling memoir Born on the Fourth of July, and recently retired U.S. Army major Danny Sjursen.

 

Kovic is paralyzed from the chest down, wounded in Vietnam as a 22-year-old who believed he was defending the world against the evils of communism. Sjursen suffers from PTSD, the result of losing soldiers under his command in combat. 

 

Both love their country, though not in the jingoistic way perpetuated by Lee Greenwood songs or NFL pre-game flyovers. And both have suffered for speaking out against war, for not playing their assigned roles as the “patriotic” veteran.

 

A popular narrative holds that after the Vietnam War, in part because of their collective guilt, Americans became more supportive of veterans, even though they might oppose military interventions overseas. Yet a comparison of Kovic and Sjursen suggests otherwise.

 

After returning home from Vietnam, a grievously wounded Kovic suffered through a painful recovery in VA hospitals as he increasingly turned against the war. Though wheelchair-bound, he was manhandled while participating in antiwar protests, even being forcibly jolted about by Secret Service agents at the 1972 Republican national convention. When he and fellow members of Vietnam Veterans Against the War started chanting “Stop the war, stop the bombing,” a delegate wearing a “Four More Years” button ran up to Kovic and spat in his face.

 

Kovic, however, was far from alone. Other antiwar Vietnam veterans endured similar abuse. As protest marchers demonstrated in New Jersey over Labor Day weekend in 1970, a World War II veteran shouted at them: “You men are a disgrace to your uniforms. You’re a disgrace to everything we stand for. You ought to go back to Hanoi.” Apparently, those who fought in war were not allowed to speak out against war.

 

Nearly fifty years later, Sjursen published a piece in the Los Angeles Times condemning U.S. policies overseas. The op-ed asked readers to consider the consequences of our nation being “engaged in global war, patrolling an increasingly militarized world.” Sjursen’s tone was raw, as he passionately sought alternatives to a foreign policy committing the United States to seemingly endless war.

 

The public reactions were not simply ad hominem attacks against the Iraq-Afghanistan veteran, but denigrating, hard-hearted, and downright malicious. One popular Facebook page that shares defense-related news posted Sjursen’s op-ed. Of the roughly 200 comments, nearly all were disapproving. Some called the major “pathetic,” “bitter,” and “sour grapes,” while one critic, noting that Sjursen likely had PTSD, claimed he was a “progressive” who had just “decided to become a victim.” One detractor evidently spoke for many—“the Army is better off with him in retirement.”

 

The experiences of Kovic, Sjursen, and Vindman are noteworthy and should force us to ask some serious questions about our relationship with the veterans of our armed forces. From where did our expectations about veteran patriotism and conformity emerge and harden? Has the current fissure in our domestic politics extended to the veteran community, so that if we disagree with members of our all-volunteer force, we reflexively assail their devotion to country?

 

This is not at all to argue that a military uniform automatically confers upon its wearer an elevated public status in which they are honored regardless of their actions. The recent case of Edward Gallagher, the Navy SEAL accused of murder yet acquitted of the charges, demonstrates the importance of holding our armed forces to the highest of standards when conducting military operations abroad. When President Trump personally intervened in Gallagher’s case, former NATO supreme commander Adm. James Stavridis worried publicly—and rightfully so—that the commander-in-chief’s decision would diminish “American moral authority on the battlefield to our detriment internationally.”

 

Gallagher, however, wasn’t speaking out against U.S. foreign policy or debating issues of national security; rather, he was engaging in a social media campaign to protect his own self-interests. All the while, the same conservative media outlets which so vigorously attacked Vindman championed Gallagher’s cause with equal vigor. Might there be a connection?

 

There seems to be something in our current state of domestic politics wherein veterans automatically earn an entitlement to public admiration, while simultaneously losing some of their basic rights of citizenship, particularly their right to voice dissent. Active-duty servicemembers long have had to forfeit some of their freedoms of speech; the Uniform Code of Military Justice (UCMJ), for instance, prohibits “contemptuous words” against public officials like the president or members of Congress. 

 

Yet to return to Kovic and Sjursen, their dissent against American foreign policy was censured not from within the military, but from those outside who sought to define and then rigorously police the expectations for “appropriate” veterans’ behavior and politics.

 

There’s a critical point to be teased out in all this, which is that people without the experience of directly fighting wars and serving in uniform are disallowing actual veterans from a conversation about future wars and armed conflict.

 

This silencing has extended to any dissent that is not in favor of militarism, even when it comes from the very people whose direct wartime experiences have shaped their opposition to intervention, armed nation-building, and war more generally. Who does that leave, we might ask, to speak out against U.S. military action overseas? Has it become impossible to question and debate our national security strategy while being “patriotic” at the same time?

 

It seems important, then, that we probe more deeply the contradictions in the relationship we have with our veterans. Despite their often audacious communal outpouring of support, far too many Americans prefer to laud only those vets who are unabashedly patriotic and completely silent about any concerns they have with U.S. foreign policy.

 

Of course, the unfortunate reality is that we want to be inspired by stories of young men and women’s “perseverance through combat.” We are heartened by tales linking national security and pride to the resolve of hearty individuals willing to sacrifice for the greater good. And, in the same vein, we are disappointed when soldiers, like one Vietnam veteran, return home and share that they “felt it was all so futile.”

 

In short, we want to find meaning in war, a sense of purpose that makes us feel better about ourselves and our nation. To view war as less than ennobling smacks of unpatriotic apologism. Thus, we attack the iconoclastic veteran-messenger without listening to their actual message. We decry the “broken” vet who somehow makes us look bad by not waving the red, white, and blue.

 

But patriotism, like courage, comes in many forms. Perhaps Americans of all persuasions, political or otherwise, might challenge themselves to be more accepting of veterans’ voices who don’t accord with their own “patriotic” views.

 

Kovic, Sjursen, and Vindman fought because their nation asked them to. The least we can do is allow them to speak up when they return home and respectfully contemplate what they have to say without attacking their “patriotism.”

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174320 https://historynewsnetwork.org/article/174320 0
75 Years After the Dresden Bombings

 

Every year on February 13th, in the cold of the late evening, all the church bells in Dresden ring out at a pre-determined moment; and onlookers in winter coats gather in the old squares to light candles. The bells – their chaotic clamour echoing through the streets for twenty minutes – represent air raid alarms. Standing amid the crowds, I found my gaze drawn up into the dark sky, envisaging the approaching bombers; imagining the dazzling green and red of their initial marker flares falling through the dark. In 1945, the children of Dresden thought these were like ‘Christmas trees’.

 

History in this city is not an abstract academic pursuit; instead, it is palpable and passionate. It matters enormously. This year marks the 75th anniversary of the horror that made Dresden – deep in eastern Germany - a by-word for annihilation. Before the war, Dresden had been known as ‘Florence on the Elbe’, because of its rich array of baroque architecture, its beautiful churches, galleries and opera house, and its strong tradition of art. Surrounded by gentle hills and rocky plains, Dresden always seemed a step apart. Even when it fell under the obscene thrall of Nazism.

 

Throughout the war, Allied bombers had raised infernos in cities all over Germany; but few in Dresden imagined that the British and Americans could turn upon the city which – pre-war – they had visited and loved in such numbers. Dresdeners were mistaken. 

 

On the night of February 13th 1945, 796 British bombers, their flights staggered over a calculated two waves from 10 PM to 1 AM, unleashed thousands of incendiaries and high explosives that in turn created a super-heated firestorm that killed 25,000 people and reduced the old city to glowing, seething rubble. According to one onlooker, the assault ‘opened the gates of hell’. Women, children and refugees, huddled in inadequate brick cellars, began to bake to death. Others were overcome with poison fumes. Bodies were melted and mummified. 

 

The following day, against a sky still black with ash, American bombers conducted a further raid, dropping fire upon the city’s open wounds. 

 

In the years since, Dresden has sought not only to come to terms with the nightmare memories; but also to find understanding. History here has a clear moral purpose: to explain, and also to allow the truth to be seen. The debate over whether Dresden was a legitimate military target – or chosen for the purposes of atavistic vengeance – is still extraordinarily sensitive. All who tread here must tread carefully.  

 

Today, the restored and rebuilt streets of the old city are patterned with echoes that honour the dead: a statue to schoolboy choristers who perished; a perfect replication of the vast 18th century Frauenkirche, with its great dome and the original fire-blackened stones remaining at its base; a vast chunk of irregular masonry left in the pavement close by, inscribed with the explanation of how it came to be parted from the church. 

 

But there are also brass plaques in buildings and upon pavements commemorating Jewish lives lost not to the Allies, but to Nazi terror. There is an equal awareness that Dresden was under a pall of moral darkness long before the bombers came. 

 

Most valuably, the city archives are now filled with a wide array of personal accounts of the night – diaries, letters, memoirs. Here are hundreds of voices, some in writing, some recorded, telling extraordinary and vividly moving stories from a variety of viewpoints. All of this matters intensely because for a great many years in Dresden, remembrance itself was a battlefield. Indeed, in some quarters, it still is.

 

There are those on the extremes of the political far-right – in east Germany, but elsewhere too – who seek to appropriate the victims of that night for their own purposes; to compare Dresdeners to victims of the Holocaust, for instance, downplaying the persecution of the Jews and magnifying the suffering of gentile German civilians. The extremists want Dresden to be seen as an Aryan martyrs’ shrine. 

 

The people of Dresden are equally implacable in their determination that this should never be so. This is not the first time others have tried to hijack their history.

 

In the immediate aftermath of the war, the Red Army took control of the city and it became part of the Soviet-dominated German Democratic Republic; more totalitarianism. The Soviets had their own version of history to teach in the early years of the Cold War: that the bombing was due to the psychosis of hyper-aggressive ‘Anglo-American gangsters’. The destruction of Dresden was a warning that the Americans were remorseless. 

 

Meanwhile, in the west, the bombing became almost a parable of the horrors of war. An American reporter, just two days after the raids in 1945, inadvertently relayed to the world that the attack on Dresden was ‘terror bombing’. The reporter’s unconsidered phrase fixed history upon its course; in Britain, in that aftermath, there was a sharp, anguished reaction from the Prime Minister himself: Winston Churchill sent out a memo decrying the wanton destruction. 

 

The Air Chief Marshal of British Bomber Command, Sir Arthur Harris, had a nickname: ‘Butcher’. That he loathed the German people was no secret; his contempt bled through in memos sent to his superiors. The broad assumption was settled: that Dresden was victim to Harris’s bloodlust. For years, his name would carry the weight of responsibility for a decision that actually lay with committees higher up.  

 

Meanwhile, the city was granted literary immortality in the late 1960s by American author Kurt Vonnegut, who had been there that night as a Prisoner of War, and who was among those forced to excavate mutilated corpses in the days after. His novel ‘Slaughterhouse-Five’ had the bombing as its dark bass line.   

 

But Dresden was not bombed because it was an ornament of German high culture that Sir Arthur Harris wanted crushed. It had genuine military significance. Memos and papers left by senior figures revealed the arguments and the wrangling.

 

First, the city was filled with military industry: Dresden had long been a city of scientific endeavour and its many factories were focused on delicate optical instrumentation and precision parts: in other words, it was at the advanced technical end of the German war machine. 

 

Additionally, the city was a teeming military transportation hub; troops shuttled through the busy railway junctions and through the streets on their way to the eastern front which, by that stage in the war, was a mere 60 miles away. 

 

The order to bomb Dresden had been triggered in part by a request from Soviet leader Joseph Stalin; he understood that an attack on this rail and road nexus would severely hamper German movements. Less forgivably, it was also a target because of the large numbers of refugees fleeing from the Red Army and passing through the city. It was calculated that bombardment would cause general chaos, further hampering the German troops.

 

But there was no such thing as precision bombing then; the plane crews – after so many missions exhausted, empty, afraid, braced for their own fiery deaths – simply got as close as they could to their designated targets.  

 

The gradual unwarping of history over the years has been largely thanks to the people of Dresden themselves. A careful historical commission a few years back sought to establish finally and definitively the number of victims of that night (Goebbels had deliberately set the number at 250,000). In addition, British historians have helped trace the decisions that led to the city’s targeting. There has also been reconciliation and remorse. A British charity, the Dresden Trust, has done much to foster even greater historical sympathy and understanding.

 

None of this is morbid, incidentally; quite the reverse. The city is wonderfully lively and cosmopolitan and welcoming. But the anniversary of the bombing is always marked with the greatest reverence: there is a performance of the overpoweringly moving Dresden Requiem, and the clangour of those solemn bells. They cannot bring solace. But, as one old lady, a complete stranger, remarked to me after the Requiem: ‘This is for Coventry, too’. She was referring to the savage 1940 firestorm raised in that English midlands city by the Luftwaffe. Remembrance stretches across borders too.     

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174322 https://historynewsnetwork.org/article/174322 0
Racist Bullying in My Small Town Made National News. Here's What We Can Learn From How the Town Responded.

Never in my life did I think that my small town, Saline, Michigan, would make national and international news, much less for something as negative as racism. Originally a tiny farming community, Saline has grown into a thriving town with a population of almost 9,000. My husband and I and our two small kids moved here about fifteen years ago, primarily for the excellent reputation of the Saline Public School system, the quiet atmosphere and the availability of affordable housing. Overall, Saline has been a great place to raise a family. Given the progressive leanings of the city’s leadership and the welcoming vibe of the town, most Salinians were shocked by the explosion of negative national and international coverage of the past few weeks. What started this negative media frenzy?

 

The (unwitting) center of the conflagration was our family friends, Adrian Iraola and his wife Lori. Our daughter Sophie worked several years for the Iraolas at their Mexican restaurant, Chela’s. Lori had also been her horseback riding coach at Saline High School and, more importantly, a great mentor and friend. The Iraolas are a successful American family, well liked by the community. They own three popular Mexican restaurants in the area. Yet because Adrian Iraola is of Mexican heritage, their three children faced racist discrimination in the Saline school system.

The context of the incident reported by the media is relevant. The Iraolas were attending a school board meeting that aimed to address and combat racism and xenophobia in the school system following an incident on Snapchat in which black football players at Saline High School were called racial slurs. The school board meeting, organized by Superintendent Scot Graden, was packed with people sympathetic to the football players who had been insulted and eager to find solutions to this problem. During this meeting, Adrian Iraola stood up and recounted the negative experience of his son, who had been called “taco,” “enchilada” and “dirty Mexican” by some of his peers at school. In a moving statement, the father said: “I went to bed to say good night. He was crying because of the abuse that he was enduring in this school system.” Suddenly, behind him, a man named Tom Burtell blurted out: “Then why didn’t you stay in Mexico?” The crowd booed and one woman asked Mr. Burtell to leave. When the room calmed down, Adrian answered the rude question with a patriotic statement: “Why didn’t I stay in Mexico? Because this is the greatest country in the world.” 

 

In my opinion, this heated exchange highlighted the banality of racism, not, as some of the national media tried to portray, the exceptional racism of Saline, Michigan. Racist, sexist and xenophobic views can exist anywhere, even in progressive towns like Saline. In fact, had the focus of the press not been primarily (and sometimes exclusively) Burtell’s racist and xenophobic remark, and had the media followed up on this case, they would have highlighted the crowd’s strong disapproval of Mr. Burtell’s comment and covered the very well-attended Saline Diversity Inclusion Rally on February 5, organized by Darin McLeskey and his wife Suzanna Emily, which encouraged frank discussion of racism and xenophobia. Brian Marl, the Mayor of Saline, along with several members of city council and students, spoke at the event. The town’s leadership promised to take measures to address the problems of racism and discrimination. In her brief presentation, councilwoman Christen Mitchell emphasized that the aim of the community should be to be anti-racist, not merely non-racist. 

 

So, if anything, Saline is exceptional in a good way. In light of this event, which dragged its image through the mud nationally, the town is determined to confront the root of the problem. Even if only a small minority of Salinians hold racist and xenophobic views, this community will do its best to make sure these discriminatory views are no longer swept under the rug, but faced, combatted and discouraged. 

 

Finally, I’m not sure that I fully agree with Adrian Iraola that America is the greatest country in the world. America is what we make it. Like him, I’m a first-generation immigrant. Like his son, I was called names when I went to middle school in Columbus, Ohio. I hardly spoke English when I came to this country from (then Communist) Romania at the age of eleven, but I understood that being called a “Commie” and “Dracula” wasn’t a good thing. America is not just about national policies, over which we have limited influence. It’s mostly about what we make of each of our towns, like Saline, Michigan. It’s about what we do to address problems like racism, sexism and xenophobia and how we shape our local communities to create better lives for everyone.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174355 https://historynewsnetwork.org/article/174355 0
A Tale of the Great Migration

Photo Credit: Carol Rosegg

 

From the end of World War I to the 1960s, several million African Americans in the South migrated to the North to escape the Jim Crow racism of the South and violent oppression by the Ku Klux Klan and other hate groups. After World War I, too, there were thousands of new jobs for blue collar workers in northern cities. African Americans moved to a number of places, and New York City, particularly Harlem, became the new home for many of them. They were attracted by its culture, its unfolding literary scene and racial freedom, in addition to jobs. In the 1920s, they developed the Harlem Renaissance, a cultural era that has been shown in numerous plays, novels and movies. Harlem was home to the great bands and singers, plus noted writers.

 

Blues for an Alabama Sky, a new play by Pearl Cleage, tells the story of a handful of those people. It is a deep, rich play in which their stories are carried out against the cultural backdrop of the Harlem Renaissance. They are not part of it, just people who live on its perimeter, bystanders. It is an engrossing, poignant play that is a lesson in humanity – everybody’s. The play opened last night at Theater Row, W. 42nd Street and Ninth Avenue, in New York.

 

The plot is simple. It is the summer of 1930 and one of the Southern migrants, Angel, a female singer, can’t find work. Her best friend Guy, who lives with her and is gay, struggles, too. He is a fashion designer who just can’t sell what he thinks are wonderful new dresses. During the play, he is waiting for word on a large purchase from famed entertainer Josephine Baker in Paris. Their neighbor Delia finds love, we think, with a doctor, Sam. All appears to be bliss for the newly arrived residents from the South, but there is trouble up ahead for all.

 

Angel is blinded by her newfound love for a new man in her life, the tall, handsome, dashing Leland, who just lost his wife and baby in the South. Guy’s big sales are delayed and delayed and delayed. Delia’s birth control clinic is shuttered. What will happen to them? Will their newfound paradise in Harlem turn ugly? Will the Great Depression bring them down, too, as it did most of the country?

 

Woven throughout the play are subplots: Guy is beaten by neighborhood thugs who object to his lifestyle, the doctor worries about his job, Delia’s struggle to convince the women of Harlem to use birth control falters, and so does Angel’s on-again, off-again life in show business and with men. Her last boyfriend was a low-level mobster. Will they find happiness in Harlem, as they so fervently believed they would?

 

What carries the play are the magnificent characters created by playwright Cleage. She starts right in with Angel, who at first appears to be very talented and very sweet and certain to soon be America’s valentine. Then she grows mischievous and deceitful and cold. Delia is one of those goody-two-shoes types that you want to hug all night. Doctor Sam is a saint, on the surface. Leland is a Prince, or is he?

 

Guy is one of the stage’s great characters. He is overtly gay and just does not care who knows it. He is funny, he is soft, he is tough. He is bold and bodacious. You just love this guy. You want him to sell his dresses and get out of New York and move to Paris, where life will be beautiful.     

 

Ms. Cleage has written a sterling play, and the end is far, far better than you expect, a real jolt.

 

In Blues for an Alabama Sky you learn a lot of history about Harlem and show business in black and white New York City in that era. The characters go to Small’s Paradise, which was, along with the Cotton Club and the Apollo Theater, one of New York’s legendary entertainment spots. You learn a lot about Josephine Baker, the fabled American singer who moved to Paris and found a new life. You learn about soaring unemployment in the black community in the Depression, which was just as high, and in some places higher, than it was in the white community. You get to know the Mafia and how it spread its tentacles into the night life of Harlem, running everything from prostitution to the clubs to the numbers racket.

 

Director L.A. Williams has done a fine job of making what could have been an ordinary play very engaging and entertaining. He gets fine work from Jasminn Johnson as Delia, Sheldon Woodley as Sam, Khiry Walker as Leland, Alfie Fuller as Angel and, whoa baby,  John-Andrew Morrison as a sensational Guy.

 

PRODUCTION: The play is produced by the Keen Company. Sets: You-Shin Chen, Costumes: Asa Benally, Lighting: Oona Curley, Sound: Lindsay Jones. The play is directed by L.A. Williams. It runs through March 14.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174354 https://historynewsnetwork.org/article/174354 0
Roundup Top 10!  

There Have Been 10 Black Senators Since Emancipation

by Eric Foner

Elected 150 years ago, Hiram Revels was the first.

 

Klobuchar’s hot dish and Warren’s heart-shaped cakes soothe our unfounded fear of women in office

by Stacy J. Williams

Female candidates are using their culinary skills to win elections.

 

 

A museum of women's history is long overdue -- and so are many others

by Thomas A. Foster

As we work to recover histories of marginalized people and subjects for a more inclusive national history presented in our museums, we must also change how the Grand Narrative is told in a museum that purports to cover all of American history.

 

 

What We Still Don’t Get About George Washington

by Alexis Coe

There continue to be ways to look with fresh eyes at our founding-est founding father.

 

 

Seeing Black History in Context

by Erin Aubry Kaplan

It’s the perfect time to get real about America’s shortcomings.

 

 

Utah women had the right to vote long before others — and then had it taken away

by Katherine Kitterman

As we remember the 19th Amendment, we shouldn’t forget what came before it.

 

 

Think the US is more polarized than ever? You don’t know history

by Gary W. Gallagher

To compare anything that has transpired in the past few years to this cataclysmic upheaval represents a spectacular lack of understanding about American history.

 

 

Abraham Lincoln Healed a Divided Nation. We Should Heed His Words Today.

by Edward Achorn

Abraham Lincoln repeatedly tops polls as our greatest and most revered president. But few people thought so on March 4, 1865, when he took the oath of office for the second time.

 

 

Eugenics is trending. That’s a problem.

by Caitlin Fendley

Any effort to slow population growth must center on reproductive justice.


 

Pete Buttigieg’s race problem

by Tyler D. Parry

He doesn’t truly understand the problems plaguing black America and their racist roots.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174384 https://historynewsnetwork.org/article/174384 0
"Keep American Beautiful" and Personal Vs Corporate Environmental Responsibility David P. Barash is professor of psychology emeritus at the University of Washington; among his recent books is Through a Glass Brightly: using science to see our species as we really are (2018, Oxford University Press). 

Today, many of us accept personal responsibility for climate change and struggle to adjust our carbon footprints, while giving little attention to the much larger effects of major corporations and countries. We might want to learn something from how mid-century Americans concerned about litter were conned and co-opted by forces larger and more guilty than they.

 

During and shortly after the Second World War, Americans produced much less garbage, having become accustomed to reusing items whenever possible and throwing things away only when absolutely necessary. This became a serious challenge, however, to newly emerging industries that yearned to profit from making and selling throw-away items, notably glass, plastics, and paper products. These industries accordingly launched a vigorous advertising campaign, inducing people to recalibrate their wartime frugality and start throwing things away after a single use. Americans were told that it was more hygienic, more convenient and (not mentioned) more profitable for the manufacturers who made and sold those items.

 

But by the early 1950s, the throw-away campaign had begun to backfire as Americans started seeing nearly everything as garbage. The glass bottles that people used to rinse and reuse were increasingly thrown out of car windows, and they had an unfortunate tendency to end up broken in a field, where grazing cows would either step on them and injure themselves or consume them and die. Dairy farmers became increasingly incensed, especially in Vermont, then as now a dairying state.

 

In response, Vermont passed legislation in 1953 that banned disposable glass bottles. Corporate America worried that this might be a harbinger of restrictions to come, so many of the bottle and packaging companies banded together in a counter-intuitive but politically and psychologically savvy way: They formed something called Keep America Beautiful. It still exists today, under a label that can only be applauded, not only for what it officially stands for but also for its social effectiveness. Keep America Beautiful began as an example of what is now often criticized as “virtue signaling,” but in this case, the goal wasn’t simply to signal virtue or even to engage in “greenwashing.”

 

Rather, the reason such behemoth companies as Coca Cola and Dixie Cup formed what became the country’s premier anti-littering organization was to co-opt public concern and regulatory responses by shifting the blame from the actual manufacturers of litter—those whose pursuit of profit led to the problem in the first place—to the public, the ostensible culprits whose sin was putting that stuff in the wrong place. Garbage in itself wasn’t the problem, we were told, and industry certainly wasn’t to blame either! We were. 

 

It became the job of every American to be a responsible consumer (but of course, to keep consuming) and in the process to Keep America Beautiful. At first and to some extent even now, legitimate environmental organizations such as the Audubon Society and the Sierra Club joined. Keep America Beautiful went big-time, producing print ads, billboards, signs, free brochures, pamphlets and eventually Public Service Announcements.

 

Keep America Beautiful coordinated with the Ad Council, a major marketing firm. People of a certain age will remember some of the results, including the slogan “Every litter bit hurts,” along with a jingle, to the tune of Oh, Dear! What Can the Matter Be: “Please, please, don’t be a litterbug …” Schools and government agencies signed on to the undeniably virtuous campaign. It’s at least possible that as a result, America became somewhat more beautiful. But even more important, that troublesome Vermont law that caused such corporate consternation was quietly allowed to die a few years after it had been passed, and – crucially – no other state ever emulated it and banned single-use bottles.

 

But by the early 1970s, environmental consciousness and anti-establishment sensibilities began fingering corporations once again, demanding that they take at least some responsibility for environmental degradation, including pollution more generally. Keep America Beautiful once again got out in front of the public mood and hired a pricey, top-line ad agency that came up with an icon ad that still resonates today with Americans who were alive at that time: Iron-Eyes Cody, aka “The Crying Indian.”

 

Appearing on national television in 1971, it showed a Native American (the actor was actually Italian- American) with a conspicuous tear in his eye when he encountered trash, while a voice-over intoned, “Some people have a deep, abiding respect for the natural beauty that was once this country. And some people don’t. People start pollution. People can stop it.” In short, it’s all our fault.

 

Iron-Eyes Cody’s philosophy is reminiscent of Smokey Bear’s “Only you can prevent forest fires.” Of course, Smokey is right. Somewhat. Individuals, with their careless use of matches, can certainly precipitate forest fires, but as today’s wildfire epidemics demonstrate, there are also major systemic contributions: Global over-heating with consequent desiccation, reduced snow-melt, diminished over-winter insect die-offs that produce beetle epidemics that in turn leave vast tracts of standing dead trees, and so forth. Individuals indeed have a responsibility to keep the natural environment clean and not to start fires, but more is involved.

 

It is tempting, and long has been, to satisfy one’s self with the slogan “peace begins with me.” As logicians might put it, personal peace may well be a necessary condition for peace on a larger scale, but even if it begins with each of us, peace assuredly does not end with me, or you, or any one individual. The regrettable truth is that no amount of peaceful meditation or intoning of mantras will prevent the next war, just as a life built around making organic, scented candles will not cure global inequality or hold off the next pandemic.

 

Which brings us to global climate change. There is no question that each of us ought to practice due diligence in our own lives: reduce your carbon footprint, turn off unneeded appliances, purchase energy-efficient ones, and so forth. Climate activist Greta Thunberg is surely right to emphasize these necessary adjustments and, moreover, to model personal responsibility, for example by traveling to the UN via sailboat. But she is also right in keeping her eyes on the prize and demanding that, above all, governments and industry change their behavior.

 

Amid the barrage of warnings and advice about personal blame and individual responsibility, there is a lesson to be gleaned from the corporate manipulations that gave us Keep America Beautiful, and its subsequent epigones: Even as we are implored to adjust our life-styles and as we dutifully struggle to comply, let’s not allow such retail actions to drown out the need for change at the wholesale level, namely by those corporations and governments whose actions and inactions underpin the problem and whose behavior – even more than our own - must be confronted and overhauled. 

 

(Just after writing this piece, I discovered that some of its ideas were covered in an episode titled “The Litter Myth,” aired September 5, 2019, on NPR’s wonderful history-focused podcast, “Throughline”: https://www.npr.org/2019/09/04/757539617/the-litter-myth.  Anyone wanting a somewhat different take on this topic would do well to “give a listen.”)

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/blog/154314 https://historynewsnetwork.org/blog/154314 0
Trump the Great and Powerful Steve Hochstadt is a professor of history emeritus at Illinois College, who blogs for HNN and LAProgressive, and writes about Jewish refugees in Shanghai.

 

 

Donald Trump’s legal troubles have had an unexpected result – the proclamation of a view of the Presidency, in which Trump is legally untouchable and newly all-powerful. The head of the party of limited government has proposed a theory of American democracy, the unlimited Presidency, and the rest of his party has fallen into line.

 

The district attorney of Manhattan is trying to obtain Trump’s financial records, including tax returns, in the case about whether the payments to Stormy Daniels by his former lawyer, Michael Cohen, then reimbursed by Trump, were legal. William Consovoy, Trump’s lawyer, told the 2nd Circuit Court of Appeals that as President, Trump is immune from the entire judicial system. Consovoy said that if Trump shot someone on Fifth Avenue, he could be charged with a crime only after he is out of office.

 

After the hearings about the Mueller investigation, Trump said in a speech about the Constitution in July 2019, “Then, I have an Article II, where I have the right to do whatever I want as president.”

 

Trump’s impeachment defense team based their case on a belief that the Presidency is much more than one of three separate and equal powers. Alan Dershowitz argued during the impeachment trial in the Senate that anything a President does to help his re-election is in the public interest, and thus not impeachable.

 

In a tweet about the recent legal case of Roger Stone, Trump insisted that he has a “legal right” to intervene in criminal cases.

 

Trump often asserts that he has the “absolute right” to do what he likes. Last April, he said he had never “ordered anyone to close our southern border,” but he could do it if he wanted. After he was criticized for revealing classified information to Russian officials in 2017, Trump said he has an “absolute right” to release such material to foreign powers.

 

The Associated Press has counted at least 29 times since his election that Trump has said he has an “absolute right” to wield executive authority. One example is his claim that he could end birthright citizenship by executive order, even though it is assured by the 14th amendment to the Constitution.

 

In June 2018, Trump tweeted a new absolute right in his Presidential theory: “As has been stated by numerous legal scholars, I have the absolute right to PARDON myself, but why would I do that when I have done nothing wrong?” Why would he bring it up if he had done nothing wrong?

 

Trump’s absolute right is tolerated with silence by the same Republicans who screamed “dictator” when President Obama issued an executive order offering deportation relief to DACA children. “Why is @BarackObama constantly issuing executive orders that are major power grabs of authority?” Trump tweeted in 2012. Gov. Chris Christie of New Jersey said, “He’s not a king, he’s not a dictator, he’s not allowed to do it himself.” House Speaker John Boehner said Obama was acting like a “king or emperor.” He said Republicans “will not stand idle as the president undermines the rule of law in our country and places lives at risk.” Now Republicans are nervously standing by as Trump declares himself above all law.

 

Trump appears to argue that he can exercise absolutely all powers that are not specifically denied to the President in the Constitution. But he goes further. Two key powers are explicitly vested in the Congress by the Constitution: power of the purse and power to declare war.

 

When the Congress did not appropriate funds for Trump’s beautiful wall, he ignored their decision, declared a national emergency, and then diverted funds which the Congress had appropriated for other purposes. Although majorities in House and Senate voted for a resolution to end the “emergency”, only a dozen Republican Senators out of 53 voted to reject Trump’s arrogation of new powers. One of those Republicans, Susan Collins of Maine, co-sponsored the resolution. She said, “The question before us is not whether to support or oppose the wall, or to support or oppose the President. Rather, it is: Do we want the Executive Branch — now or in the future — to hold a power that the Founders deliberately entrusted to Congress?” But she didn’t believe that idea strongly enough to vote for Trump’s impeachment.

 

Last week, the Senate passed the Iran war powers resolution that limits Trump’s ability to wage war against Iran. Eight Republicans voted with Democrats to pass the bill 55-45. The House passed a similar bill last month 224-194, with only 3 Republicans voting for it. Only a small minority of Republicans is willing to challenge Trump’s theory of the unlimited presidency.

 

The other modern example of a President asserting absolute rights is instructive. When David Frost interviewed Nixon in 1977, three years after he had resigned, he asked, “Would you say that there are certain situations - and the Huston Plan was one of them - where the president can decide that it's in the best interests of the nation, and do something illegal?” Nixon famously replied, “Well, when the president does it, that means it is not illegal.” The so-called Huston Plan was hatched by Nixon and his advisors after he had been in office for two years and the bombing of Cambodia in 1970 had unleashed massive popular protests. Here’s the Plan: “The report recommended increasing wiretapping and microphone surveillance of radicals - relaxing restrictions on mail covers and mail intercepts; carrying out selective break-ins against domestic radicals and organizations; lifting age restrictions on FBI campus informants; and broadening NSA's intercepts of the international communications of American citizens.” FBI Director J. Edgar Hoover and the National Security Agency, who would have to carry out the illegal activities, convinced Nixon to abandon the Plan. But according to Nixon years later, those illegal actions cannot be illegal if he initiates them.

 

That appears to be where Trump is heading. The most lawless Presidents wish to abolish the very possibility that a President can commit a crime.

 

It is not surprising that a president so unconcerned about Constitutional norms would try to add to his powers. It is disturbing and dangerous that the Republican Party as a body supports Trump going far beyond what they harshly denounced just a few years ago. Republican Congressmen and -women are sitting by while Trump amends the Constitution by fiat.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/blog/154316 https://historynewsnetwork.org/blog/154316 0
Happy 200th Birthday to Susan B. Anthony (1820-1906)

 

Women have reason to celebrate in 2020. August 26 marks the centennial of the enactment of the Nineteenth Amendment, or the Susan B. Anthony Amendment, granting women the right to vote. February 15 is another important anniversary, the bicentennial of the birth of Susan B. Anthony. Although it is not an official national holiday, Anthony’s birthday has been an occasion that women have celebrated during Anthony’s life and after her passing in 1906.

 

The celebration of Anthony’s eightieth birthday on February 15, 1900, was described as being the greatest event in the woman suffrage movement. An attendee said, “There never has been before and, in the nature of things, there can never be again, a personal celebration having the significance” of this birthday party.

 

The reception began at the Corcoran Art Gallery in Washington, D.C., where Anthony was seated in a queen’s chair at the main entrance, at the head of a receiving line. For three hours, she shook hands with the thousands who amassed to extend congratulations. After greeting Anthony, the crowds made a pilgrimage to the marble busts of Anthony, Elizabeth Cady Stanton and Lucretia Mott on exhibition in a room of the gallery.

 

Glorious speeches, praises, and gifts were presented. Among those who honored Anthony was Coralie F. Cook. “It is fitting on this occasion, when the hearts of women the world over are turned to this day and hour,” she said, “that the colored women of the United States should join in the expressions of love and praise offered to Miss Anthony upon her eightieth birthday.…She is to us not only the high priestess of woman’s cause but the courageous defender of right wherever assailed. We hold in high esteem her strong and noble womanhood…Our children and our children’s children will be taught to honor her memory for they shall have been told that she has always been in the vanguard of the immortal few who have stood for the great principles of human rights.”

 

In addition to being an 80th birthday celebration, the occasion marked Anthony’s retirement from the National American Woman Suffrage Association. Anthony said she gladly surrendered her place now that she had reached 80, “ten years beyond the allotted age of man.” However, she did not plan to stop working. “I shall work to the end of my time,” she said.

 

Six years later, on Anthony’s birthday, February 15, 1906, a portrait bust of Susan B. Anthony, sculpted by Adelaide Johnson, was accepted by the Metropolitan Museum of Art of New York City. This was considered an especially “crowning achievement” since institutions almost never accepted portraits of a living person as part of their permanent collections. The portrait was placed “in symbolic recognition of where she [Anthony] belongs, at the head of the grand stairway as if to greet the visiting throngs.”

 

Six weeks later, Anthony slipped into a coma, and a few days later, she passed away.  “Among the men and women who have paid tribute to Susan B. Anthony since she closed her eyes in death March 13, not one owes her such a debt of gratitude as I myself,” wrote Mary Church Terrell, a personal friend of Anthony and a leader of the National Colored Women’s Organizations. “. . . To her memory has been erected a monument more precious than marble, more enduring than brass or stone. In the heart of a grateful race, in the heart of the manhood of the world she lives and Susan B. Anthony will never die.”

 

Friends carried on the tradition of celebrating Anthony’s birthday. In 1920, a few women went to the Metropolitan Museum to try to pay tribute at her marble shrine on her 100th birthday. A florist was supposed to deliver a wreath to the museum at 3:30, but when the women arrived at 3:40, the wreath had not arrived. At four o’clock, one of the women went into a rage and began hurling insults at her friends. Three minutes later, the florist arrived with the wreath. The little band of women marched past the museum clerks and up the staircase to take measurements for placing the wreath on the marble bust of Susan B. Anthony. Clerks approached the women and, upon learning that they did not have the permission of museum authorities, one of them phoned the office. A tall, gray-haired man appeared and asked the group to leave. One of the women recalled that the occasion of Anthony’s 100th birthday was “embarrassing and painful beyond words.”

 

The grandness of the next birthday made up for the embarrassment of the previous year. On February 15, 1921, the Portrait Monument to Lucretia Mott, Elizabeth Cady Stanton and Susan B. Anthony was unveiled in the Rotunda of the U.S. Capitol. More than a thousand women and men came to celebrate the marble sculpture of the three women, the passage of the Nineteenth Amendment, and the 101st birthday of Susan B. Anthony.

 

However, after the glorious ceremony, the monument was removed from the Rotunda and placed a floor below, in the Crypt. The Portrait Monument remained on the lower level for decades, yet it still served as a rallying icon for women, particularly on Anthony’s birthday. An elaborate celebration took place in the Capitol Crypt on February 15, 1934, with more than 45 women’s organizations paying tribute to Susan B. Anthony’s leadership “in the movement for Equality for Women.” The Marine Band played as representatives of the organizations placed floral tributes at the base of the monument. The first speaker was Representative Virginia E. Jenckes, who was described as “the realization of Miss Anthony’s dream,” that is, “a woman in Congress, imbued with the Feminist’s point of view.”

 

Rep. Jenckes said, “I pay tribute today as a Member of Congress to the memory of Susan B. Anthony and her associates. To them should go the credit of bringing a woman’s viewpoint into national affairs, and we women who are privileged to render public service are inspired and guided by the ideals of Susan B. Anthony.”

 

Other speakers included Representative Marian W. Clarke of New York, Esther Morton Smith of the Society of Friends, Mrs. William John Cooper of the American Association of University Women, and Mary Church Terrell of the National Colored Women’s Organizations. The last speaker of the afternoon was sculptress Adelaide Johnson, who offered this tribute: 

 

Today we are privileged to honor ourselves by paying tribute to Susan B. Anthony on this anniversary of her birth. Invincible leader, first of the few of such hosts as never before moved to one call, the uprise of woman. A Leader for more than a half century in the crusade of half of the human race. Mother of the whole—demanding individual liberty.

 

…I might entertain you with reminiscences, as we were intimate friends for more than twenty years, the friendship beginning in 1886, but to give light or information is my role. . . . As Leader of the infant Woman Movement . . . [Miss Anthony] was pelted and driven from the platform when speaking, and experienced mob attack from the rabble. She was arrested for casting a ballot…In contrast, less than a quarter of a century later the same Rochester that hissed her from the platform in the sixties opened its newspaper columns wide in praise to “Our Beloved Susan.” Two thousand hands were grasped by the “Grand Old Woman.”

 

In 1935, Congress honored Susan B. Anthony with a gift on her 115th birthday: they washed the statue of her (and Stanton and Mott). Of course, after the bath, they left Susan and her friends in the Crypt.

 

At Anthony’s birthday celebration in 1936, women noted the progress that had been made over the decades. Anthony had been the target of rotten eggs and even some women had resented her efforts for woman’s rights. It was recalled: “Women pulled their skirts aside when she passed and declared that her shameful behavior caused them to be ashamed of their sex.” In contrast, it was noted that by 1936 there was a First Lady who was an active leader instead of a mere tea pourer, a woman minister to a foreign country, a woman Secretary in the President’s Cabinet, and a woman director of the United States Mint.

 

In commemorating Anthony’s birthday, women have examined the past, celebrated their progress, and looked to the future. I hope women and men across the nation will continue this tradition on February 15, 2020, the 200th birthday of Susan B. Anthony. 

“Free College” in Historical Perspective

 

“Free college” is a visible and volatile issue in the Democratic candidates’ presidential campaign platforms. Bernie Sanders has showcased this proposal since 2016.  Others, notably Elizabeth Warren, have since joined the chorus.  No Democratic candidate today can afford to ignore the issue, even if it means taking time to spell out strict limits. Pete Buttigieg, for example, argues that federal tuition aid applicable at state colleges should be confined to assisting students from working- and middle-class families.

 

What is the historical origin of “free college” in presidential platforms? One intriguing clue comes from the 1947 report commissioned by then-President Harry Truman, titled Higher Education for American Democracy.  The six-volume work, drafted by a blue-ribbon panel chaired by the president of the American Council on Education, invited readers to look “Toward Equalizing Opportunity.” It made the bold case that “The American people should set as their ultimate goal an educational system in which at no level – high school, college, graduate work, or professional school – will a qualified individual in any part of the country encounter an insuperable economic barrier to the attainment of the kind of education suited to his aptitude and interests.”

 

Was this truly the federal government’s blueprint for the “free college” campaign promises of 2020? The prose from 1947 resembles the proposals that speechwriters now craft for Democratic candidates heading into the primaries.  On close inspection, however, the 1947 presidential report was a false hope, or at the very least a promise delayed, for today’s federal “free college” proposals.  That’s because President Truman put the report aside in order to focus on Cold War defense spending.  It was comparable to a movie script that neither the president nor Congress ever put into production. 

 

Important components of the task force’s recommendations were fulfilled over the next two decades, but these reforms were implemented through scattered state initiatives with little if any commitment from the federal government.  Starting in 1951, major federal funding for higher education was concentrated in such agencies as the new National Science Foundation and the National Institutes of Health.  High-powered federal support for research and development flourished while national plans for college access were tabled.

 

The historic reminder is that creating and funding colleges has been – and remains – the prerogative of state and local governments.  This legacy was reinforced in 1972 when several national higher education associations lobbied vigorously for massive federal funding that would go directly to colleges and universities for their operating budgets. Much to the surprise of the higher education establishment, these initiatives were rejected by Congress in favor of large-scale need-based student financial aid programs, including what we know today as Pell Grants (originally, Basic Educational Opportunity Grants) and numerous related loan and work-study programs.  This was a distinctively American model of portable, need-based aid to individual students as consumers.  The net result was that some, but hardly all, college applicants made gains in college access, affordability, and choice.

 

States, not the federal government, have been the place where issues of low tuition and college costs have been transformed into policies and programs. California was an important pioneer. The University of California’s founding charter of 1868 included an explicit provision that the university would not charge tuition to state residents.  This continued into the twentieth century.  The only modification came about in 1911, when the legislature maintained the “no tuition” policy but did approve a student fee of $25 per year for non-academic services.  The state’s public regional colleges voluntarily followed the University of California’s no-tuition example.  The state’s junior colleges, which were funded by local property taxes as extensions of a town’s elementary and secondary schools, also were free to qualified residents. Less well known is that “no tuition” sometimes was a practice and policy outside California’s public institutions. Stanford University, for example, did not charge tuition when it first admitted students in 1891.

 

The “no tuition” practice gained some following elsewhere in the nation. Rice Institute, which opened in 1910, charged no tuition to any enrolled student until 1968.  Berea College in Kentucky and a cluster of other “work colleges” did not charge tuition, but did expect students to work on campus to defray costs. The more widespread practice at colleges in the United States was to keep tuition low. At the historic, private East Coast colleges such as Harvard and Brown, tuition charges remained constant between 1890 and 1910 at about $120 to $150 per year.  Indexed for inflation, that would be a charge of about $3,200 today. All this took place without any federal programs.

 

In the decade following World War II, California, along with states in the Midwest and West, invested increased tax dollars in public higher education, usually charging state residents some tuition while working to keep prices relatively low. Such was not the case in all states. Pennsylvania, Vermont and Virginia, for example, provided relatively little tax revenue to their public institutions, which depended instead on charging students relatively high tuition, comparable to a user’s fee. 

 

After 1960, massive expansion of higher education enrollments and new campus construction in California meant that the state’s expensive “free college” programs were unsustainable. By 1967, state legislator (and future governor) George Deukmejian stumped the state, making the case to local civic groups that California could no longer afford its “no tuition” policy at its public four-year institutions. This added support to Governor Ronald Reagan’s 1968 campaign to impose tuition. The immediate compromise was technically to maintain “no tuition,” but to have public campuses charge substantial student fees. This ended in 1980, when the University of California first charged tuition of $300 per academic year for an in-state student, combined with a mandatory student services fee of $419. This trend continued so that by 2011-12, annual tuition and fees for a California resident were $14,460, and out-of-state students at the University of California paid $37,338.

 

An added state expense was that California had extended student choice by creating state scholarship programs that an eligible student could use to pay tuition at one of the state’s independent (private) colleges.  The approach became an attractive model nationwide: by 1980, at least thirty state legislatures had funded tuition assistance grant programs applicable to a state’s public and private accredited colleges.

 

Declining state appropriations for higher education in California and other states signaled a reconsideration of “free college” as sound, affordable public policy. Why shouldn’t a student from an affluent family pay some reasonable tuition charge?  Did “no tuition” increase affordability and access for students from modest-income families? Research findings were equivocal at best.

 

Connecting the historical precedents to current presidential “free college” proposals eventually runs into serious concerns. Foremost is that presidential candidates must be aware of the traditional sovereignty of state self-determination over higher education policies. In some states, taxpayers and legislatures decide to fund higher education generously. Other states do not.  Why should a federal program override state policies? Why should taxpayers in a state that supports public higher education generously also be asked to pay high federal taxes to shore up state institutions in another state where legislators and voters do not?  How much federal subsidy should we give to lowering tuition prices when a state institution does not demonstrate that it has worked at keeping operating costs down? Also, “free college” proposals today may unwittingly limit student choice if the tuition buy-downs are limited to selected institutional categories, such as public colleges and two-year community colleges.

A Gallon of Talk and a Teaspoon of Action, Then Label the Problem "Insolvable" and Plan Another Conference

There is an increasing element of the burlesque as world leaders jealously vie for the front row in photographs at annual summits where problems are solemnly addressed but seldom solved. They mulled over trade last June in Osaka. Then came climate talks in Biarritz. At the end of January 2020, photographs were posted from Jerusalem, where more than forty world leaders deliberated upon a malicious phenomenon that began in that ancient city two thousand years ago on a hill named Golgotha.

What will the latest speech-laden conference on this topic avail, when seventy-five years of intensive Holocaust education have had no effect on Holocaust deniers and could not stop the resurgence of the virulent anti-Semitism that infected Germany one hundred years ago?

"Tiresome talk and tokens of sympathy without follow-up action!" scornfully wrote the German justice inspector Friedrich Kellner in his assessment of the impotent League of Nations and the short-sighted democratic leaders of his time who did nothing to thwart Adolf Hitler's plans for totalitarian rule over Europe. "Where were men who could recognize the reality?" asked Kellner. "Did they not see the tremendous re-arming of Germany when every German illustrated newspaper had pictures that exposed everything? Every small child here knew at least something about the armament. And the entire world looked on! Poor world!"

As an organizer for the Social Democratic Party, Kellner had campaigned against Hitler and his Nazi Party throughout the entire time of the ill-fated Weimar Republic. When Hitler came to power, Kellner began a diary to record Nazi crimes and the German people's overwhelming approval of the murderous agenda. His outspokenness against the regime marked him as a "bad influence" and he was placed under surveillance by the Gestapo. His position as a courthouse administrator gave him some protection from arbitrary arrest.

"The French watched calmly as Hitler re-armed Germany without having to suffer any consequences," wrote Kellner about the Allies' failure to respond decisively to Hitler's threats. The British were equally guilty. "Neville Chamberlain should have been a parson in a small village, not the foremost statesman of a world power who had the duty and obligation to immediately counter Hitler."

"The Western nations," he declared, "will carry the historical guilt for not promptly providing the most intensive preventative measures against Germany's aggression. When German children were being militarized at the age of ten, and legions of stormtroopers and the SS were being formed, what did their Houses of Commons and Senates undertake against this power?"

Kellner regarded Winston Churchill as one of the rare men who did see the reality, and who might have preempted the war had he received the reins of government sooner. In his memoirs, Churchill labeled the six years of brutality and terror "The Unnecessary War" -- a sobering and infuriating epitaph for the tens of millions of victims who needlessly lost their lives.

The justice inspector reserved a special contempt for nations that claimed neutrality while their neighbors and allies were under attack. Sweden and Switzerland grew rich providing Germany with raw materials. In June 1941, as Hitler ruled over a conquered Europe, Kellner derided Americans -- including the celebrated aviator Charles Lindbergh -- who insisted Hitler could be assuaged by diplomacy. "Even today there are idiots in America who talk nonsense about some compromise with Germany under Adolf Hitler. . . . Mankind, awake, and concentrate all your strength against the destroyers of peace! No deliberations, no resolutions, no rhetoric, no neutrality. Advance against the enemy of mankind!"

Six months later, after the attack on Pearl Harbor, he wrote, "Japan shows its mean and dishonest character to the world. Will the isolationists in the U.S.A. now open their eyes? What a delusion these cowardly people were under. When you stand on the sidelines claiming neutrality during a gigantic fight for human dignity and freedom, you have actually placed yourself on the side of the terrorist nations."

Despite his call for preemptive action against dictators plotting war, Friedrich Kellner knew war firsthand and abhorred it. In 1914, as an infantry sergeant in the Kaiser's army, he was wounded in battle. He highly valued diplomacy and preferred political solutions to the world's problems. For twelve years he campaigned politically as a Social Democrat. But Adolf Hitler showed the futility of diplomacy and politics when it comes to fanatics whose ideologies and dreams of dominion would undo centuries of civilization.

We are seriously challenged by such types today. Protected by their totalitarian patrons, Russia and China, the leaders of Iran and North Korea spread disinformation, terror and chaos throughout the Middle and Far East. They openly threaten terror attacks -- and even nuclear strikes -- against the democracies. Ironically, France and Germany, intent on maintaining business relations with these countries, have led the European Union these past three years in resisting the USA's renewed sanctions against Iran. As Winston Churchill said of the neutral nations in 1940, "Each one hopes that if he feeds the crocodile enough, the crocodile will eat him last."

Conferences and speeches, and promises by world leaders to never forget the Holocaust, will neither impress nor dissuade modern aggressors. What was needed in Jerusalem was a unified pledge to immediately apply strong and unrelenting economic sanctions against these nations, and to encourage and assist the people of Iran and North Korea to stand up against their own dictators. And the assembled leaders needed to emphasize that this would be the only way to avert further military action, such as the recent missile attack that killed the Iranian terrorist mastermind Qasem Soleimani.

Our leaders should reflect on the observations of a German patriot who did his best to keep a madman from seizing power in his nation, and who saw with despair how the democracies let it occur. He wrote his diary as "a weapon of truth" for future generations, so they could stop their own Nazi-types. "Such jackals must never be allowed to rise again," he said. "I want to be there in that fight."

Jimmy Carter: The Last of the Fiscally Responsible Presidents

 

Popular impressions of Jimmy Carter tend to fall into two broad categories.  Many see him as a failed president who mismanaged the economy, presided over a national “malaise,” allowed a small band of Iranian militants to humiliate the United States, and ultimately failed to win reelection.  His final Gallup presidential approval rating stood at 34%—equal to that of George W. Bush.  Among postwar presidents, only  Richard Nixon (24%) and Harry Truman (32%) left office with lower approval ratings.  As the political scientist John Orman suggested some years ago, Carter’s name is “synonymous with a weak, passive, indecisive presidential performance.”  For those who hold this view, Carter represents everything that made the late ‘70s a real bummer.  

 

His supporters, meanwhile, portray him as a unique visionary who governed by moral principles rather than power politics.  They point out that he initiated a groundbreaking human rights policy, forged a lasting Middle East peace agreement, normalized relations with China, pursued energy alternatives, and dived headlong into the most ambitious post-presidency in American history.

 

Both perspectives have their merits.  Yet although even Carter’s admirers would rather ignore his economic record than defend it, popular memory of his economy is off the mark.  Contrary to the prevailing wisdom, by many indices the U.S. economy did relatively well during Carter’s presidency, and he took his role as steward of the public trust seriously.  He kept the national debt in check, created no new entitlements, and steered the nation clear of expensive foreign wars.  Whatever else one may think about the man, it is no exaggeration to say that Jimmy Carter was among the last of the fiscally responsible presidents.

 

Although there is no single measure for evaluating a president’s economic performance, if we combine such standard measures as unemployment, productivity, interest rates, inflation, capital investment, and growth in output and employment, Carter’s numbers were higher than those of his near-contemporaries Ronald Reagan, Richard Nixon, Gerald Ford, and George H.W. Bush.  “What may be surprising,” notes the economist Ann Mari May, “is not only that the performance index for the Carter years is close behind the Eisenhower index of the booming 1950s, but that the Carter years outperformed the Nixon and Reagan years.”  Average real GDP growth under Carter was 3.4%, a figure surpassed by only three postwar presidents: John F. Kennedy, Lyndon Johnson, and Bill Clinton.  Even though unemployment generally increased after the 1960s, the average number of jobs created per year was higher under Carter than under any postwar president.

 

Particularly noteworthy was Carter’s fiscal discipline. Although Keynesian policies were central to Democratic Party orthodoxy, Carter was a fiscal conservative who touted balanced budgets and anti-inflationary measures.  By and large, he stuck to his campaign self-assessment: “I would consider myself quite conservative . . . on balancing the budget, on very careful planning and businesslike management of government.”  

 

Under Carter, the annual federal deficit was consistently low, the national debt stayed below $1 trillion, and gross federal debt as a percentage of GDP peaked below forty percent, the lowest of any presidency since the 1920s.  During his final year in office, the debt-to-GDP ratio was 32% and the deficit-to-GDP ratio was 1.7%.  In the ensuing twelve years of Reagan and Bush (1981-1993), the debt quadrupled to over $4 trillion and the debt-to-GDP ratio doubled.  The neoliberal policies popularly known as Reaganomics had plenty of fans, but in the process of lowering taxes, reducing federal regulations, and increasing defense spending, conservatives all but abandoned balanced budgets.

 

The debt increased by a more modest 32% during Bill Clinton’s presidency (Clinton could even boast budget surpluses in his second term) before it ballooned by 101% to nearly $11.7 trillion under George W. Bush.  Not only did Bush entangle the U.S. in two expensive wars, but he also convinced Congress to cut taxes and to add an unfunded drug entitlement to the 2003 Medicare Modernization Act.  During the Obama presidency, the debt nearly doubled again to $20 trillion. (Obama and Bush’s respective totals depend in part on how one assigns responsibility for the FY2009 stimulus bill.)  Under President Trump, the national debt has reached a historic high of over $22 trillion, and policymakers are on track to add trillions more in the next decade.

 

There are a few major blots on Carter’s economic record. Inflation was a killer.  Indeed, much of Carter’s reputation for economic mismanagement stems from the election year of 1980, when the “misery index” (inflation plus unemployment) peaked at a postwar high of 21.98.  The average annual inflation rate during Carter’s presidency was a relatively high 8% – lower than Ford’s (8.1%), but higher than Nixon’s (6.2%) and Reagan’s (4.5%).  The annualized prime lending rate of 11% was lower than Reagan’s (11.6%) but higher than Nixon’s (7.6%) and Ford’s (7.4%).  Economist Ann Mari May concurs that while fiscal policy was relatively stable in the Carter years, monetary policy was “highly erratic” and represented a destabilizing influence at the end of the 70s.

 

Carter’s defenders note that he inherited a lackluster economy with fundamental weaknesses that were largely beyond his control, including a substantial trade deficit, declining productivity, the “great inflation” that had begun in the late 1960s, Vietnam War debts, the Federal Reserve’s expansionary monetary policy, growing international competition from the likes of Japan and West Germany, and a second oil shock.  “It was Jimmy Carter’s misfortune,” writes the economist W. Carl Biven, “to become president at a time when the country was faced with its most intractable economic policy problem since the Great Depression: unacceptable rates of both unemployment and inflation”—a one-two punch that came to be called “stagflation.”

 

In response, Carter chose austerity.  Throughout the 1970s, Federal Reserve chairmen Arthur F. Burns and G. William Miller had been reluctant to raise interest rates for fear of touching off a recession, and Carter was left holding the bag.  After Carter named Paul Volcker as Fed chairman in August 1979, the Fed restricted the money supply and interest rates rose accordingly—the prime rate reaching an all-time high of 21.5% at the end of 1980.  All the while, Carter kept a tight grip on spending.  “Our priority now is to balance the budget,” he declared in March 1980.  “Through fiscal discipline today, we can free up resources tomorrow.”  

 

Unfortunately for Carter, austerity paid few political dividends.  As the economist Anthony S. Campagna has shown, Carter could not balance his low tolerance for Keynesian spending with other Democratic Party interests.  His administration took up fiscal responsibility, but his constituents wanted expanded social programs.  Meanwhile, his ambitious domestic agenda of industrial deregulation, energy conservation, and tax and welfare reform was hindered by his poor relationship with Congress.

 

The Carter administration might have shown more imagination in tackling these problems, but as many have noted, this was an “age of limits.”  Carter’s successors seem to have taken one major lesson from his failings: The American public may blame the president for a sluggish economy, but when it comes to debt, the sky’s the limit.

 

 

New Evidence from the Clinton Presidential Library on Bill Clinton and Helmut Kohl's Diplomatic Relationship

Clinton and Kohl meet in the Bach House, 14 May 1998 (Bachhaus.eisenach - Own work; CC BY-SA 3.0)

 

“I loved him,” said Bill Clinton in his eulogy during Helmut Kohl’s memorial in July 2017. “I loved this guy because his appetite went far beyond food, because he wanted to create a world in which no-one dominated, a world in which cooperation was better than conflict, in which diverse groups make better decisions than individual actors […]. The 21st century in Europe […] really began on his watch.”[1]

 

Indeed, Clinton’s and Kohl’s tenures connected the post-Cold War world with 21st century politics.[2] Both were united in their efforts to establish a new post-Cold War order and a lasting peace. They believed that there should be a strong European Union and an enlarged NATO, so that Germany would be surrounded by NATO members in the East instead of being on the front line of Central European instability. Clinton and Kohl paid meticulous attention to Russia’s inclusion and the emergence of a special set of NATO-Russia partnerships.[3] They saw their endorsement of Russia’s President Boris Yeltsin as an essential investment in democracy and the establishment of market structures in Russia. Both thought that united Germany had to be part of NATO’s military out-of-area engagement in Bosnia, the first deployment of German forces outside their own country since the end of World War II. The war in Bosnia proved that history and old conflicts had the potential to upset the emergence of a peaceful and prosperous Europe.

 

These issues were at the forefront of Clinton’s and Kohl’s meetings and telephone conversations. In December 2018, the Clinton Presidential Library released nearly 600 pages documenting their perception of international challenges and giving insights into the motives of their statecraft. Clinton and Kohl trusted one another, and Clinton often called Kohl to ask for advice on crucial topics.[4]

 

From today’s vantage point, the formation of Europe’s post-Cold War order looks easy. In fact, it was an enormous task and a constant challenge. In February 1995, for instance, Kohl told Clinton that “with respect to Russia, Central and Eastern Europe, NATO expansion, and the status of Ukraine, these are the essential points. No matter what we do with Moscow, if we fail in Ukraine (and the former Yugoslavia) we are lost. […] The situation in Europe is very vague and ambiguous.”[5]

 

During this critical phase, the Clinton-Kohl partnership provided leadership and vision. Since his days as a Rhodes scholar in Oxford in the late 1960s, Clinton had a keen interest in Europe and sympathy for Germany. During his first meeting with Kohl in March 1993, Clinton noted that “when he had been a student in England he had visited Germany as often as possible. At the time, he had been ‘almost conversational’ in German and still could understand a lot.”[6]

 

Kohl knew that Clinton expected united Germany to assume more international responsibility as a partner in NATO. In 1993, Kohl fought hard for the amendment of Germany’s constitution allowing for the country’s participation in NATO’s first intervention outside the member nations in Bosnia. “Today,” Kohl said at their March 1993 meeting, “good US-German relations are even more important than they had been thirty years ago when the division of Germany and the terrible fear of war had made things psychologically easier. People now have a different fear, and are asking whether their leaders can cope with new challenges or are drifting ‘like wood on the Potomac.’ This makes new German-American ties necessary.”[7]

 

Clinton and Kohl also spoke with one voice when it came to support for Russia’s President Boris Yeltsin. Both used personal diplomacy and positive feelings to interact effectively with Yeltsin despite frank disagreements over issues such as the war in Chechnya, NATO enlargement and the Kosovo war. Both believed in their capacity to bring Yeltsin around on the NATO question, doing all they could to allay Russia’s fears and anxieties.[8] “In part,” as Clinton told Kohl in December 1994, “Yeltsin has a real concern. The Russians don't understand how everything will look 10-15 years from now.”[9]

 

Clinton’s and Kohl’s aim was to open up NATO, but slowly, cautiously, and combined with an expanded effort to engage Russia. Indeed, they managed to keep NATO enlargement from harming Yeltsin’s reelection in 1996 while ensuring that NATO responded to Central and Eastern European desires to join the alliance. Both established a close personal rapport with Yeltsin and used countless meetings and telephone conversations to coordinate and synchronize their policies toward Russia. In September 1996, when Yeltsin announced his forthcoming open-heart surgery, Kohl called Clinton to provide a detailed report of his recent visit to Yeltsin’s dacha. “I think it is important that all of us be supportive of him during his surgery and that we do not create an impression of taking advantage of him during his convalescence,” Clinton said.[10]

 

In 1998, when Kohl’s tenure came to an end after 16 years as Chancellor, Clinton called him, praised his achievements and emphasized their enduring friendship: “Hillary and I think you are wonderful. I will always treasure our friendship and will be your friend forever. I am grateful to have worked with you and grateful that you always did the right thing […].”[11] Finally, in April 1999, Bill Clinton awarded Helmut Kohl the Presidential Medal of Freedom for his lifetime achievements and leadership. Their differences in age and style produced bonds rather than friction. Both saw politics as a vehicle for major improvements in everyday life. Both sensed that Europe and the United States had to build the bridge to the 21st century – and they had to do it together. The Clinton-Kohl documents shed new light on their efforts to facilitate the emergence of an interdependent and transnational world based on freedom, peace, security and prosperity.

 

[1] Remarks by Bill Clinton, European Ceremony of Honor for Dr. Helmut Kohl, Strasbourg, 1 July 2017, see http://www.europarl.europa.eu/pdf/divers/eu-ceremony-of-honour-mr-kohl-20170701.pdf.

[2] See Bill Clinton, My Life (New York: Knopf, 2004); Helmut Kohl, Erinnerungen 1990–1994 (Munich: Droemer Knaur Verlag, 2007).

[3] See Strobe Talbott, The Russia Hand. A Memoir of Presidential Diplomacy (New York: Random House, 2002); James Goldgeier and Michael McFaul, Power and Purpose. U.S. Policy Toward Russia after the Cold War (Washington DC: Brookings Institution Press, 2003); James Goldgeier, “Bill and Boris. A Window Into a Most Important Post-Cold War Relationship,” in: Texas National Security Review 1:4 (August 2018), 43–54, see https://tnsr.org/wp-content/uploads/2018/08/TNSR-Vol-1-Iss-4_Goldgeier.pdf

[4] See https://clinton.presidentiallibraries.us/items/show/57651.

[5] Memcon Clinton and Kohl, 9 February 1995, see https://clinton.presidentiallibraries.us/items/show/57651, 198.

[6] Memcon Clinton and Kohl, 26 March 1993, see https://clinton.presidentiallibraries.us/items/show/57651, 16.

[7] Ibid., 15.

[8] See James Goldgeier, Not Whether But When. The U.S. Decision to Enlarge NATO (Washington DC: Brookings Institution Press, 1999); Ronald Asmus, Opening NATO’s Door. How the Alliance remade itself for a new Era (New York: Columbia University Press, 2002); Daniel Hamilton and Kristina Spohr (Eds), Open Door. NATO and Euro-Atlantic Security after the Cold War (Washington DC: Brookings Institution Press, 2019); Mary E. Sarotte, “How to Enlarge NATO. The Debate inside the Clinton Administration, 1993–95,” in: International Security 44:1 (Summer 2019), 7–41, see https://www.mitpressjournals.org/doi/pdf/10.1162/isec_a_00353

[9] Memcon Clinton and Kohl, 5 December 1994, see https://clinton.presidentiallibraries.us/items/show/57651, 166.

[10] Telcon Clinton and Kohl, 10 September 1996, see https://clinton.presidentiallibraries.us/items/show/57651, 364.

[11] Telcon Clinton and Kohl, 30 September 1998, see https://clinton.presidentiallibraries.us/items/show/57651, 581.

Why Trump Is Different than Reagan, Either Bush, Dole, McCain, or Romney—He’s Evil

 

If we look at Republican candidates for president over the last forty years, we find one significant difference between Donald Trump and the party’s previous nominees. Despite all of his forerunners’ failings, it would be a mistake to label any of them as evil. Mistaken or misguided at times? Yes. But evil? No. Even progressive leftists should admit that occasionally, and sometimes more than occasionally, the six pre-Trump Republican candidates displayed moments of basic human decency. 

 

A few definitions of evil are “profoundly immoral and wicked” and “something that brings sorrow, trouble, or destruction.” Doesn’t that fit Trump? 

 

Several months ago, Michael Sean Winters, who “covers the nexus of religion and politics” for the National Catholic Reporter, wrote of “the seven deadly sins of Donald Trump.” One after another, the author ticks them off—greed, lust, gluttony, sloth, envy, wrath, and pride—and comments, “What we see with President Donald Trump and his cast of sycophants and co-conspirators . . . is a rare thing: All seven deadly sins on display at once.”

 

Winters observes that “greed has long been a motivating factor in Trump's life.” Since becoming president he has added greed and lust for power to his long-time pursuit of money and fame and his lust for women—the author just mentions in passing “that horrible tape,” where Trump (in 2005) stated he was able to grab women “by the pussy.” And no mention is made of some 23 women who since the 1980s have accused Trump of various types of sexual misbehavior, including rape. “The evidence of gluttony is an extension of his greed and lust for power: He not only wants power, he can't get enough of it. Never enough money. Never enough women. Never enough wives. Like all gluttons, he leaves a mess in his wake.” Sloth? “As president, he famously can't be bothered reading his briefing papers,” and as of late October 2019, President “Trump had 224 golf outings.” Regarding all his mentions and put-downs of President Obama, “it must be envy.” As for wrath, Winters predicted “we will see more and more wrath in the coming months.” And sure enough we did in early February, at the 68th annual National Prayer Breakfast (see below) and with the firing of two men who testified against him in impeachment hearings. Finally, we come to pride, “the deadliest of the seven deadly sins.” “What astounds, really, is that Trump's pride is the pride of the con man. He is proud of his ability to make people think he is a man of abilities when he really is a man of few gifts beyond those we associate with showmanship.”

 

In earlier HNN articles (see, e.g., this one of mid-2016), I have criticized Trump for his colossal egotism and lack of humility, a virtue that Winters identifies as pride’s opposite. Many others concerned with ethics have also commented on it, as conservative columnist David Brooks did in 2016 when he wrote that Trump’s “vast narcissism makes him a closed fortress. He doesn’t know what he doesn’t know and he’s uninterested in finding out. He insults the office Abraham Lincoln once occupied by running for it with less preparation than most of us would undertake to buy a sofa.”  

 

Brooks once taught a course at Yale on humility, and his most recent book is The Second Mountain: The Quest for a Moral Life (2019). Conservative Trump critics Michael Gerson and Peter Wehner also wrote a book on morality entitled City of Man: Religion and Politics in a New Era (2010). In February 2019, Gerson delivered a requested sermon at Washington’s National Cathedral. Wehner remains a Senior Fellow at the Ethics and Public Policy Center.

 

More recently Gerson wrote the essay “Trump’s politicization of the National Prayer Breakfast is unholy and immoral.” Trump used “a prayer meeting to attack and defame his enemies,” and “again displayed a remarkable ability to corrupt, distort and discredit every institution he touches,” Gerson observed. Now, after the Senate impeachment trial, Trump “is seized by rage and resentment,” and “feels unchecked and uncheckable.” Gerson also warned that Trump has “tremendous power,” and “we are reaching a very dangerous moment in our national life.”

 

About a week before Gerson’s article appeared, The Atlantic ran Peter Wehner’s much longer essay, “There Is No Christian Case for Trump.” Much of it deals with the impeachment charges against Trump and his wrongdoing regarding Ukraine, but Wehner also quotes favorably a December editorial by Mark Galli in “the evangelical world’s flagship publication, Christianity Today”: “[Trump] has dumbed down the idea of morality in his administration. He has hired and fired a number of people who are now convicted criminals. He himself has admitted to immoral actions in business and his relationship with women, about which he remains proud. His Twitter feed alone—with its habitual string of mischaracterizations, lies, and slanders—is a near perfect example of a human being who is morally lost and confused.”

 

Wehner also mentions other unethical Trump behavior—“authorizing hush-money payments to a porn star,” “misogyny,” “predatory sexual behavior,” the “sexualization of his daughters,” and “his use of tabloids to humiliate his first wife, Ivana, when he was having an affair with Marla Maples.”

 

Columnist Ross Douthat is yet another conservative religious critic of Trump. The author of a critical study of Pope Francis, Douthat has had this to say about our president: he is a “debauched pagan in the White House,” and he is “clearly impaired, gravely deficient somewhere at the intersection of reason and judgment and conscience and self-control.”

 

Among writers who are less conservative than Brooks, Gerson, Wehner, and Douthat, comments about Trump’s evilness are even more widespread. To take just one example, we have Ed Simon, an HNN contributing editor. In an earlier article on Trump’s “religion,” I quoted Simon: “If the [Biblical] anti-Christ is supposed to be a manipulative, powerful, smooth-talking demagogue with the ability to sever people from their most deeply held beliefs who would be a better candidate than the seemingly indestructible Trump?”

 

The comments above do not exhaust the list of Trump’s evils, and a few more should be amplified or added. 1) He is a colossal liar. As The Washington Post stated, “Three years after taking the oath of office, President Trump has made more than 16,200 false or misleading claims.” 2) He lacks empathy and compassion. For example, in late 2015, he mocked a journalist's physical disability. 3) His boastful remarks about himself are examples of delusional pride—e.g., “in my great and unmatched wisdom,” and “I have a great relationship with the blacks.” 4) Although it is no easy job to identify Trump’s worst sin, his greatest may be what he is doing to our environment.

 

Any article dealing with Trump’s evil must contend with the overwhelming support he receives from evangelicals. Why this is so, and why they are wrong, is addressed in Wehner’s essay mentioned above. Also, although most evangelicals are conservative and support Trump, there are “progressive evangelicals” who believe “the evangelical establishment’s embrace of Trumpism—unbridled capitalism, xenophobic nativism, and a willingness to engage with white supremacy—goes against everything Jesus stands for.” 

 

Finally we come to the question, “Do a president’s morals matter?” Did not John Kennedy and Bill Clinton engage in adulterous and/or inappropriate sexual behavior? Was the more upright Jimmy Carter a better president than these two? For all the “trickiness” of “Tricky Dick” Nixon and the shame of the Cambodian bombing and Watergate, did he not pursue effective detente policies toward the USSR and Communist China?

 

The answer is presidential morals do matter, but only somewhat—though more than we realized until Trump demonstrated how costly their absence can be. Political wisdom, which itself requires certain virtues, is important, but so too are other skills like interpersonal and administrative ones.     

 

Some historians have written of the importance of presidential values and virtues. FDR biographer James MacGregor Burns maintains that “hierarchies of values . . . undergird the dynamics of [presidential] leadership,” and “considerations of purpose or value . . .lie beyond calculations of personal advancement.” Focusing on Presidents Lincoln, the two Roosevelts, and Lyndon Johnson, Doris Kearns Goodwin writes that the four presidents were “at their formidable best, when guided by a sense of moral purpose, they were able to channel their ambitions and summon their talents to enlarge the opportunities and lives of others.” Ronald Feinman has stated that “the most significant factor” in rating presidents’ greatness “is when they demonstrate moral courage on major issues that affect the long term future.” But he, as well as presidential historians Robert Dallek and Michael Beschloss, have commented on Trump’s lack of positive values and unfitness for office. 

 

Still, in the midst of primary season, much of the political talk is about the various candidates’ proposed policies and whether they would be better or worse than what Trump has delivered and promises. “Medicare for all?” “Free public college tuition?” “More or less government regulation?” Etc. Etc. But we are missing the major point. Like the six Republican presidential candidates who preceded Trump (and like many Trump supporters), all of the major 2020 Democratic candidates are decent human beings. Trump is evil. He is a liar, hatemonger and polarizer who has little knowledge of, or respect for, America’s traditions and better political values. Unable to tolerate criticism, he increasingly surrounds himself with flatterers and toadies. 

 

In Episode 8 of the second season of the HBO series “Succession,” Logan Roy’s brother says this about the media tycoon: “He's morally bankrupt. . . . In terms of the lives that will be lost by his whoring for the climate change deniers, there's a very persuasive argument to be made that he's worse than Hitler.” I thought not only of Rupert Murdoch, head of the media conglomerate that runs Fox News, but also of Trump. For many, now and in the future, the 2020 presidential election may not just be an ordinary U.S. election, but quite literally a matter of life or death.  

Remembering a Marine's Bravery at Iwo Jima, 75 Years Later

Though I was just thirteen when I decided to march on Japan, ride the wave of American retribution, and make the Japanese pay for the attack on Pearl Harbor, I had already passed from boy to man. I thought I knew it all. Though I had yet to see friends evaporate before my eyes, or an enemy bleed out and die by my own hand, I had loved and now I had hated and I considered myself more than ready to go to war.

 

That I would find both the need and the strength to pull a live hand grenade to my gut while a second grenade lay beneath me, ready to detonate, would have astonished me even in my moments of greatest bravado. I went to war with vengeance in my heart. I went to war to kill. Such is the irony of fate that I will be remembered for saving the lives of three men I barely knew.

 

My journey into manhood began one bleak October afternoon when my beloved father drew his final breath, having lost his long battle with cancer. I was eleven years old. Afterward, I pushed away any man the lovely widow, Margaret Lucas, attempted to bring into our lives. I did not need a man; I was one. I was a tough kid who loved to fight. I was rebellious by nature and had a hair-trigger temper. Troubled in general: that was the young Jack Lucas.

 

As my inner turmoil heated up so did world events, and with the reprehensible bombing of Pearl Harbor, we both boiled over. I lived by my wits and often made up my own rules, leaving a trail of broken jaws and busted lips as I went along. So, it was not much of a stretch on my part when I found a way to join the United States Marine Corps, though I was only fourteen at the time. I went AWOL to catch a train headed in the direction of the war. Then I stowed away on a ship to reach one of the Pacific’s worst battlefields. I figured, if I figured anything at all, that if I was shrewd enough to impose my will on the United States Marine Corps, the Japanese would give me little trouble.

 

Having already borne the weight of my life’s biggest loss, I was not afraid to face whatever awaited me on Red Beach One, Iwo Jima. I had no way of knowing that in a matter of a few short hours I would make the most important decision of my life and in the lives of three members of my fire team. The choice would be mine: either I could die alone or all of us would die together.

 

Excerpted from Indestructible: The Unforgettable Memoir of a Marine Hero at the Battle of Iwo Jima. Reprinted with permission. Copyright Harper Collins, 2020. 

An Interview with Historian Dr. David Dzurec of the University of Scranton

 

The University of Scranton is a private Catholic and Jesuit institution of 3,729 undergraduate students, founded in 1888 and located in Scranton, Pennsylvania. The History Department engages with the public through its connection to local history. The department “seek[s] to provide our students with an understanding of the significant institutions, events, trends and individuals that have shaped that experience, thus helping them to develop a better understanding of contemporary cultures and the human condition.” I spoke with Dr. David J. Dzurec, the History Department Chair at the University of Scranton, about the University’s involvement with the local area. He discussed the department's emphasis on service learning and how they use local history inside the classroom.

 

Q: What is the value in local history?

A: Local history helps provide a context and an immediate importance for the work and the research that our students are conducting.  Even in a survey-level class, integrating aspects of local history or connections to the region helps bring national and world history events home to students.  Additionally, the larger Scranton community benefits greatly from knowing its own past.  Ideally, one of the jobs of the history department at the University of Scranton is to help bring these things (student work and the larger community) together.

 

Q: How are students engaging in the classroom with local history through service-learning?

A: At the University of Scranton, in addition to numerous internships, our history students have engaged in a variety of projects within the community and in coordination with local, state, and national institutions.  Our students have conducted research and helped organize projects with the Lackawanna County Historical Society (which is right on campus), the Weinberg Library’s Special Collections, the Steamtown National Historic Site, and the Pennsylvania State Library.  Students in our “Craft of the Historian” course worked with several of these institutions to digitize some of the Scranton family papers.  This project not only helped the students develop digitization and preservation skills, but it also allowed them to connect to the history of the local community (especially since the University’s campus is built in part on the Scranton family estate).  In another course, students studied the process of conducting oral history interviews and applied those skills by interviewing members of the Latinx community in Scranton.  These interviews were employed in research projects as part of our “Digital History” course.  Another ongoing project in our digital history class builds on student work researching the history of coal mining in the Scranton region.

 

Q: How is civic learning relevant to an understanding of history?

A: Understanding local history helps students develop a context for global and national historic events.  When students are able to connect moments like the “Square Deal” to the Lackawanna County Court House in downtown Scranton, it adds a level of understanding they might not otherwise have had.

 

Q: Are there any specific documents that the University of Scranton possesses of historical significance either to the local area or beyond? If so, how do students and the public get to engage with these resources?

A: The University’s Special Collections include some of the Scranton family papers (part of which one of our classes worked to digitize), the Congressional Papers of Joseph McDade, Reports from the Pennsylvania Mining Commission, and the Passionist Historical Archives.  In addition to digitizing some of these special collections, one of our faculty members, Fr. Robert Carbonneau, works with students to make use of the Passionist Archives in their research, and he helped to organize a special exhibit of these documents. A number of our classes have conducted research in the various collections at the University, and many of our students have completed internships working with Librarian Michael Knies in historic preservation.

 

Q: What do you see as Scranton’s broader role in history? How does it contribute to any other historical narratives?

A: Scranton’s long history of mining (specifically anthracite coal) and the wave of immigrants who came to the region to work in those mines place the history of Scranton squarely in the center of the history of the United States in the early 20th century.  Most notably, the Anthracite Coal Strike of 1902 serves as a critical moment in the history of industrialization, labor history, and TR’s “Square Deal.”   

 

Q: How do you hope to expand and broaden the History Department’s public engagement?

A: Local history has become a rich source of material for our students as they develop their research skills and learn about the variety of tools available to them in the research and writing process.  Going forward, I hope to see our students continue to tell the stories of Scranton’s immigrant community, whether its members arrived in the early 20th century or the early 21st century, in a variety of formats (from classroom presentations to digital projects).  I would also like to see us expand on the work we have already done to develop additional projects in conjunction with some of our local historic sites, like the Lackawanna County Historical Society and the Steamtown National Historic Site.  I also think our students would benefit from the development of a new digital history lab that would allow them to develop projects based on their local historical research that could be made widely available on the web.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/173764 https://historynewsnetwork.org/article/173764 0
"We Can Always Learn More History:" An Interview with Historian Seth Center

 

Seth Center is senior fellow and director of the Brzezinski Institute’s Project on History and Strategy at the Center for Strategic and International Studies (CSIS). His scholarship employs a historical lens to examine the contemporary national security agenda, develop applied history findings to inform responses to future challenges, and connect diplomatic and military historians to the policy community. (You can read more about Dr. Center on the CSIS website here.)

 

When did you decide you were interested in history?

 

I was fortunate to have terrific undergraduate professors at Cornell University like Michael Kammen, Walter LaFeber, and Sherman Cochran. That experience led to graduate school at the University of Virginia, where I had equally terrific professors including Melvyn Leffler (my dissertation advisor), Brian Balogh, and Philip Zelikow. Mentorship is exceptionally important in helping aspiring historians become professional historians.

 

How did that initial interest ultimately lead you to become the senior fellow and director of the Brzezinski Institute’s Project on History and Strategy at the Center for Strategic and International Studies?

 

Serving in government allowed me to use historical methods for practical purposes like explaining the origin of a particular diplomatic problem or the evolution of a part of the US government. It’s not a traditional academic job, but it does use the same historical methods. Think tank historical work is similar--it’s about leavening policy dialogue that usually runs through presentist concerns with a little bit of historical perspective.

 

Prior to joining the CSIS, you served at the National Security Council (NSC) as the director for National Security Strategy and History and at the U.S. Department of State as a historian. How was historical research and analysis utilized to inform policy?

 

History is ubiquitous in making foreign policy, conducting diplomacy, explaining actions, and understanding partners’ approach to the world. Unfortunately, history can be as recent as yesterday in government. Because people are constantly moving and shifting jobs, why we are who we are and how we got here is often as distant as ancient history. A good historian can help an organization think about the costs and benefits of sustaining a current path or changing course.

 

History can help recapture the original reasons a decision was made and help to surface whether current assumptions are still valid. It can help policymakers think through alternative policy choices. History can provide a sense of proportion and scale. It can help answer the question of “are we confronting something new?”; “How important or significant is the event we are facing?”

 

History comes in two forms. First, comparison or analogy. History can provide similar episodes in the past to help understand or assess current conditions. Comparing a current event to a past event can help clarify what is novel and what is familiar, and then allow one to think about how to respond to a current situation with more precision, or at least better judgment. Second, history can illuminate the deeper roots of a specific situation or event. This type of history is particularly useful in helping to understand the evolution of a diplomatic relationship or to understand how a competitor is approaching a situation.

 

One of the goals stated on the CSIS website for the project on history and strategy is to forge the “connections needed for policymakers and historians to be more useful to each other.” Why is this relationship between policymakers and historians so important? What makes applying a historical lens, in your opinion, effective in informing policymakers?

 

Time and urgency are important dynamics in policymaking. History and historical reflection often take time, and history is produced without any particular regard for urgent contemporary concerns. The challenge in getting history to policymakers is to ensure it reaches the reader in a timely manner so they can think about its meaning and implications before they have to act. The windows for analysis and action are often tight; historians have to hit those windows to be effective.

 

When advocating for your research to those who do not already have an academic foundation of historical knowledge, what is the most difficult aspect of communicating the value of considering historical ideas or concepts?

 

Time is almost always the biggest barrier. Getting busy people to consider the past and take the time to read about what has come before is usually the initial barrier. The second challenge is the natural tendency to see issues as unique or without precedent, which discourages looking backwards. That tendency is amplified when the most common historical comparison might suggest a particular analysis or course of action could produce disaster.

 

Are there any unexpected or insufficiently discussed ways in which history is useful today?

 

We can always learn more history. In an ideal world, policymakers would possess or seek to understand the history of a particular problem before making a decision, and also consider what similar situations in the past might illuminate the challenges of a particular situation and help anticipate the best ways to move forward.

 

Taking on common and popular historical myths is always frustrating for historians. Changing basic interpretations of events or people once they have formed in the popular imagination is tough--that’s true for academic historians and historians in the policy world.

 

Is there a particular accomplishment or project that stands out since working at the CSIS? Why was this achievement valuable?

 

One interesting project we are working on is exploring why the Cold War has become such a prevalent analogy for understanding the current US-China relationship. We have asked historians to assess the many ways the analogy is being used in an effort to inform current policy debate with historical knowledge. An interesting and important question has emerged: if the dissimilarities outweigh the similarities, then should the analogy be used at all? History does not lend itself to basic mathematical formulas, which leaves a lot of room for interpretation. We are trying to make the comparison a little more precise by sharpening the distinctions between the past and present.

 

The HNN website states that “the past is the present and the future too.” What does that mean to you, and do you agree or disagree?

 

Basically, we confront very few truly novel challenges in the world. A deeper knowledge of history can help us focus on what those novel challenges are. For the rest of the problems, we should consider how we have responded as individuals, institutions, and nations so that we can anticipate future action and, in an ideal world, reduce risks and mistakes.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/173705 https://historynewsnetwork.org/article/173705 0
Trump's Acquittal and the History of the Intentionally Undemocratic Senate

Years and decades from now, it’s not improbable that the January 31st scheduling confluence of both Great Britain’s official exit from the European Union and the Senate’s vote to dismiss witness testimony in the Donald Trump impeachment “trial” will mark that date as a significant nadir in trans-Atlantic democracy. A date that will live in infamy, if you will, or perhaps rather “perfidy” as Senate Minority Leader Chuck Schumer admirably put it. The Senate Republicans’ entirely craven, self-serving, undignified, and hypocritical vote to shield their president from any sort of examination is entirely unsurprising, though somehow still shocking. 

Senator Lamar Alexander’s cynical justification of his vote, whereby “there is no need for more evidence to prove something that has already been proven,” is in some manner the credo of the contemporary Republican Party. Spending the last three years bellowing “fake news” at anything which was disagreeable to them, Alexander’s honesty and commitment to reality is in a way refreshing. Alexander acknowledges that Trump is guilty – he just doesn’t care, so why waste time with an actual trial? What’s more surprising than the Republicans setting the process up for the inevitable “acquittal” this week is that so-called “moderates,” like Senator Susan Collins and Senator Mitt Romney, actually did the right thing. The better to trick some centrist Democrats into thinking that the GOP hadn’t completely lost its mind. 

Nobody of good conscience or sense could possibly think that the Republican role in the Senate impeachment proceedings was anything other than a successful attempt at cover-up, one with the ramification of letting Trump correctly know that he can do whatever he wants with absolutely no repercussions. The crossing of this particular Rubicon is by no means the only, or by far even the worst, democratic degradation of the past few years, but it’s certainly a notable one, as Republicans from Senate Majority Leader Mitch McConnell on down are not even bothering to hide their lack of ethics. From that perspective, as disturbing as Republican cravenness may be, it’s very much in keeping with the zeitgeist. Theorist Astra Taylor observes this in her excellent treatise Democracy May Not Exist, but We’ll Miss It When It’s Gone, when she notes that “recent studies reveal that democracy… has weakened worldwide over the last decade or so… It is eroded, undermined, attacked… allowed to wither.” Though the concept of “democracy” and the upper chamber of the United States legislative branch are hardly synonymous with one another, it’s crucial now more than ever to keep in mind what’s intentionally undemocratic about the Senate as an institution. 

In the hours after the predictable Senate vote, reactions from centrist liberals to those further to the left seemed to anecdotally break down into two different broad, emerging consensuses. While most camps can, should, and must be united in trying to make sure that Trump only serves one term, the analysis of what the inevitable Senate acquittal means was wildly divergent. 

Among many centrist liberals there was a halcyon valorization of a Senate that never quite existed, a Pollyannaish pining for a past of process, decorum, and centrist sensibility. Such was the sentiment of Boston Globe editorialist Yvonne Abraham when she earnestly asked, “What is there to say about this week’s shameful events in the U.S. Senate that doesn’t sound hopelessly naïve?” Eager to prove that they never enjoyed The West Wing, pundits further left emphasize, accurately, that the Senate itself is explicitly an institution predicated on the rejection of the popular will, or as one wag on Twitter put it, “watching the resistance libs get all hot and bothered about how fundamentally undemocratic the senate is would be balm for the soul, but they won’t learn a thing from this whole nonsense affair.” 

Except here’s the thing – both things can be true. The Senate can be an institution always predicated on unequal representation, and the Republican vote can still be a particularly shameful moment. What can and must be learned from the affair isn’t that resistance to Trump has to be fruitless, but rather that we can’t expect institutions and procedures to be that which saves us.  

Crunching the numbers is sobering if one really wants to know precisely how undemocratic the Senate actually is. A majority of Representatives voted to impeach Trump in the far more democratic (and Democratic) House of Representatives, reflecting a January 20th CNN poll which found that a narrow majority of Americans, 51%, supported the president’s removal from office. Yet the Senate was able to easily kill even the possibility of such a result (even beyond the onerous two-thirds requirement for conviction, which has historically made such an outcome a Constitutional impossibility). Ian Millhiser explains in Vox that “more than half of the US population lives in just nine states. That means that much of the nation is represented by only 18 senators. Less than half of the population controls about 82 percent of the Senate.” He goes on to explain that in the current Senate, the Republican “majority” represents fifteen million fewer people than the Democratic “minority.” 

Such an undemocratic institution is, like the Electoral College, partially a remnant of an era when small states and slave-owning states were placated by compromises that would give them political power while the Constitution was being drafted. The origins of the institution are important to keep in mind, because even though population disparities between states like Wyoming and California would have been inconceivable to the men who drafted the Constitution, the resultant undemocratic outcomes are a difference of degree but not of kind. When the Senate overturns the will of the people, that’s not a bug but a feature of the document. The point of the Senate was precisely to squelch true democratic possibility – it’s just particularly obvious at this point. What’s crucial for all right-thinking people who stand in opposition to Trump is to remember that that’s precisely the purpose of the Senate, and that a complacent belief in the fundamental decency of institutions is dangerous. 

So valorized is the Constitution in American society, a central text alongside the far more radical Declaration of Independence in defining our covenantal nationality, that there can be something that almost seems subversive in pointing out its obviously undemocratic features. Yet the purpose of the Constitutional Convention was in large part to disrupt the popular radicalism of the Articles of Confederation, which structured governance from the Revolution until Constitutional ratification. While there may be truth in the claim that the Constitution was necessary to forge a nation capable of defending and supporting itself, the Articles era was a period of genuine democratic hope, when radical and egalitarian social and economic arrangements were possible in at least some states. Literary scholar Cathy Davidson argues in Revolution and the Word: The Rise of the Novel in America that, far from enacting some kind of democratic virtue, ratification signified an eclipse of radical possibility, noting “the repressive years after the adoption of the Constitution.” For Davidson, the much-valorized drafters in Philadelphia met to tamp down the democratic enthusiasms of the Articles, concerned as they were about the “limits of liberty and the role of authority in a republic.” 

The Constitutional Convention is thus understood more properly as a type of democratic collapse, like the Restoration after the seventeenth-century English Revolution, or the end of Reconstruction following the American Civil War. Historian Woody Holton writes in Unruly Americans and the Origins of the Constitution that though “Today politicians as well as judges profess an almost religious reverence for the Framers’ original intent,” reading Federalist arguments from the eighteenth century indicates that the purpose of the Constitution was “to put the democratic genie back in the bottle.” Such was the position of one Connecticut newspaper, which in 1786 argued that state assemblies paid “too great an attention to popular notions,” or of the future Secretary of the Treasury Alexander Hamilton, recently transformed by a Broadway musical into a hero of neoliberal meritocratic striving, who complained that he was “tired of an excess of democracy.” Only two generations later, partisans of democratic reform understood all too clearly that the Constitution was a profoundly compromised document, with abolitionist William Lloyd Garrison describing it as “a covenant with death, an agreement with hell.” Historian Gordon Wood famously argued that the “American Revolution was not conservative at all; on the contrary: it was as radical and as revolutionary as any in history,” and that may very well be true. But what also seems unassailable is that the Constitution was in some manner a betrayal of that radicalism.

If Garrison and the young Frederick Douglass were in agreement with other radicals that the Constitution was reactionary, then progressives would come to embrace the document because of an ingenious bit of rhetorical redefinition born of necessity during the Civil War. President Abraham Lincoln is the most revolutionary of Constitutional exegetes, because he entirely reframed what the document meant through the prism of the democratic Declaration of Independence. Garry Wills, in the magisterial Lincoln at Gettysburg: The Words That Remade America, argued that at Gettysburg, “Lincoln was here to clear the infected atmosphere of American history itself, tainted with official sins and inherited guilt. He would cleanse the Constitution… [by altering] the document from within, by appeal from its letter to the spirit, subtly changing the recalcitrant stuff of that legal compromise, bringing it to its own indictment.” Calling it among the “most daring acts of open-air sleight of hand ever witnessed by the unsuspecting,” Wills argues that “Lincoln had revolutionized the Revolution, giving people a new past to live with that would change their future indefinitely.” By providing a jeremiad for a mythic Constitution that never was, Lincoln accomplished the necessary task of both imparting to the document a radical potential which didn’t exist within its actual words and suturing the nation together. 

This was a double-edged sword. On the one hand, by having rhetorical and legal recourse to Constitutionality much progress has been made throughout American history. But there is also the risk of deluding oneself into thinking that the Constitution is actually a democratic document, which goes a long way to explaining the frustration among many when the Senate acts as they should expect it to. When we assume that procedure is salvation, heartbreak will be our inevitable result. The question for those of us on the left is how do we circumnavigate the extra-democratic aspects of the Constitution while living within a Constitutional republic? We could attempt another rhetorical “cleansing” of the Constitution in the manner of Lincoln, a reaffirmation of its spirit beyond its laws – and much could be recommended in that manner. 

I wonder, however, if disabusing ourselves of some of our illusions might be preferable, and whether there might be something to recommend in embracing a type of “leftist devolution,” a commitment to a type of small-scale, regional, and local politics and solidarity that we often ignore in favor of the drama of national affairs. Too often we’re singularly focused on the pseudo-salvation of national politics, forgetting that democracy is far larger than the Constitution, and has to do with more than just what happens in Washington. Taylor writes that “Distance tends to give an advantage to antidemocratic forces… because people cannot readily reach the individuals in power or the institutions that wield it,” explaining that “Scale is best understood as a strategy, a means to achieve democratic ends,” for “Democracy begins where you live.” All politics must be local, something that the right has understood for generations (which is, in addition to inequities established in their favor, part of why they’re so successful right now). Democracy, and agitation for it, happens not just in the Senate, but in statehouses, on school boards, on city councils, in workplaces. It must happen everywhere.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174255 https://historynewsnetwork.org/article/174255 0
Was Nixon Really Better Than Trump?

 

For those who think the Trump/Nixon comparisons are overdone or exaggerated, or that Nixon was a better person or president than Trump, think again. Both men were cut from the same bolt of cloth. They share common character defects: hyper-insecurity, overwhelming feelings of victimhood and unfair vilification, hatred of losing, authoritarian and bullying complexes, and loathing of enemies.

 

We recently witnessed another shared attribute: when things go well, they turn ugly. Neither man seems capable of enjoying or appreciating moments of vindication or success. Instead, they become laser-focused on revenge against those who they believe dishonestly attacked them. At last on the mountaintop, their instinct is to start throwing rocks and boulders at those below.

 

The difference between Trump and Nixon is that Trump acts out in public; Nixon did it behind the scenes as secret microphones picked up his every word. Ominously for the nation, there is another important difference. Nixon sought revenge at the beginning of his second term but was effectively stopped from exercising his worst impulses when the Watergate scandal began to cripple his presidency. In this respect, Trump is Nixon in reverse. He has survived his impeachable scandal and now has free rein to act out on his instincts for vengeance.

 

Trump took to the warpath the day after winning acquittal in the Senate impeachment trial. At the National Prayer Breakfast the following morning, Trump held up two newspapers with the headline “ACQUITTED” as he entered the room and then he mocked House Speaker Nancy Pelosi and her faith, saying, “I don’t like people who say, ‘I pray for you’ when they know that that’s not so.” She sat just feet away on the dais. Trump called his impeachment “a terrible ordeal by some very dishonest and corrupt people.”

 

That was only the start. Later in the day, the president “celebrated” his acquittal in the East Room of the White House, surrounded by Republican members of Congress and Fox News hosts in an hour-long profanity-laced pep rally. “We went through hell unfairly, did nothing wrong,” he told an animated audience. He called the Russia investigation “bullsh*t.” He reiterated his theme that what he had gone through in his first three years in office has been “evil.” He said: “It was corrupt. It was dirty cops. It was leakers and liars. And it should never ever happen to another president.”

 

And, as might be expected, the knives have come out. Lieutenant Colonel Vindman and Gordon Sondland, both impeachment witnesses, along with Vindman’s brother, a bystander, have been summarily dismissed.

 

Nixon’s tapes show an embattled president acting the same way, only behind closed doors.

 

On the night of January 23, 1973, President Nixon announced on national radio and television that an agreement had been reached in Paris to end the war in Vietnam. This was the culmination of his most important pledge as a presidential candidate in 1968 and 1972. Nixon swore he’d bring “peace with honor,” and finally in the first days of his second term he could proclaim its fulfillment. Nothing was more important to this man who was raised by a devout Quaker mother. In his first inaugural, in fact, Nixon declared that “the greatest honor history can bestow is the title of peacemaker.” This is the epitaph that graces his headstone at his presidential library and childhood homestead in Yorba Linda, California.

 

Yet on the day of this supreme breakthrough, Nixon was almost desperate to get even with his perceived enemies. His meetings and calls with presidential assistant (and later evangelical Christian) Chuck Colson are astonishing and vulgar.

 

Once it was clear the peace accord had been initialed, Nixon met with Colson and begged him to marshal anyone and everyone in the administration to start hitting back at those who had opposed him. Nixon had taken a huge gamble following his re-election in November 1972. The North Vietnamese, who had been negotiating with Henry Kissinger in Paris, became recalcitrant. Though Nixon scored a landslide win, he had no coattails and Democrats gained seats in the Senate (one being Joe Biden of Delaware). The North Vietnamese therefore dug in, recognizing that a Democratic Congress (the House, too, remained in Democratic hands) would likely pull financial support for the war when it reconvened in January 1973. All they had to do was wait things out.

 

Nixon, Kissinger and General Alexander Haig met in mid-December to discuss what to do. As Kissinger says on the tape of the meeting, it was time to “bomb the beJesus” out of the North, including population centers like Hanoi. The infamous Christmas bombing began on December 18. Nixon consulted no one in Congress. The world shrieked, thinking the American president had gone mad.

 

But the brutality worked. The North Vietnamese came back to the table and a peace accord was hashed out almost in time for Nixon’s second inaugural. It should have been a moment of gratitude and elation; it was anything but.

 

“This proves the president was right,” Nixon says to Colson. “We’ve got to go after our enemies with savage brutality.” At another point, Nixon froths, “We’ve got to kick them in the b*lls.” He encouraged Colson to sue Time magazine for libel over a Watergate story and called his press secretary, Ron Ziegler, to order the blackballing of “the bastards at Time magazine” for the rest of his presidency. 

 

Scores were to be settled; nothing was to be held back.

 

Later that night, after calmly telling the nation of the peace agreement, Nixon was back on the phone with Colson working himself into a lather. They chortled over how correspondents at CBS were “green, hateful, disgusted, and discouraged” at the news. “I just hope to Christ,” Nixon spewed, “some of our people, the bomb-throwers, are out, because this ought to get them off their ass, Chuck.” He wanted his team energized to work that very night to start the process of retaliation. “Those people who wanted me to have an era of good feeling,” he snarled, “if they bring that memo to me again, I’m going to flush it down the john.”

 

Nixon was unleashed. But the Senate voted a little over two weeks later to establish the Ervin Committee to investigate irregularities in the 1972 election. Nixon became embroiled in the “drip, drip, drip” of bombshell disclosures and revelations, and he never regained his footing. Over time, he became powerless to get even with his enemies, resigning in the summer of 1974 as he faced impeachment.

 

Trump is on the other side of his scandal, and he not only survived, he is virtually “locked and loaded.” There is nothing to rein him in absent a defeat in November. With no other check in sight, the country will now see just how far President Trump will go in settling old scores. If last week is the prelude, it’s going to get even uglier. Imagine if he wins re-election.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174296 https://historynewsnetwork.org/article/174296 0
Re-Animating the 1619 Project: Teachable Moments Not Turf Wars

 

 

Who wins when distinguished historians, all white, pick fights over the history of slavery with prominent New York Times journalists, all black, who developed the newspaper’s 1619 Project? Beginning last year, a stream of well-known scholars has been objecting publicly to the journalists’ contention that slavery, white racism and African American resistance so fundamentally define the American past that they act as our history’s (so to speak) prime mover. The historian/critics’ basic reply: Wrong. It’s more complicated!

 

One of these scholars, Sean Wilentz, quite recently detailed his objections in the January 20, 2020 issue of The Atlantic. At about the same time a dozen additional dissidents published their critiques (to which the 1619 Editors responded) in the History News Network. Meanwhile, out-front bigots like Newt Gingrich, Tucker Carlson, Michael Savage and Rush Limbaugh hijacked the controversy. They have captured the headlines, dominated the news cycle and-- as they would have it-- taken home the trophy and delivered it to Donald Trump. The New York Times journalists have emerged with collateral damage and the historians as unmitigated losers in the court of public opinion.  Lost as well was a rare opportunity for a substantial evaluation of slavery’s role in shaping our shared American experience. 

 

But here’s what’s most important. Those of us who value the 1619 Project can reclaim our “teachable moment” by excavating beneath the heated rhetoric. There we will discover that the journalists and the historians embrace conflicting but equally valuable historical truths regarding slavery’s power to shape our nation’s past and present. I will soon articulate why this is so and what we can learn as a result.

 

First, however, we must move beyond the conflict that erupted when Wilentz, joined by James M. McPherson, Gordon Wood, James Oakes, and Victoria Bynum, eminent scholars all, forgot that they also have an obligation to serve us as educators, not as censors. By so harshly attacking the credibility of the 1619 Project in their letter to The New York Times, they squandered the “teachable moment” that the Project itself intended to create. Instead, these scholars appointed themselves gatekeepers charged with the heavy enforcement of their personal versions of high academic “standards.”

 

Instead of constructively dissenting and inviting dialogue, they berated the 1619 journalists for pushing “politically correct” distortions grounded in Afro-centric bias. “The displacement of historical understanding by ideology” is how one of them phrased it. They demanded retractions, worked assiduously (and failed) to recruit scholars of color to their cause, and sent their complaints directly to the top three editors of the Times and its publisher, A.G. Sulzberger. That looks a lot like bullying. Dialogue dies when one contending party publicly attempts to undercut the other with his or her bosses.

 

The historians, however, were not alone when criticizing the 1619 Project. Newt Gingrich proclaimed that “The NYT 1619 Project should make its slogan ‘All the Propaganda we want to brainwash you with.’”  Ted Cruz fulminated that “There was a time when journalists covered ‘news.’ The NYT has given up on even pretending anymore. Today, they are Pravda, a propaganda outlet by liberals, for liberals.” The Trumpite commentators who had been attacking the 1619 Project since last August seized the distinguished historians’ arguments and repeated them on Fox News. Erick Erickson’s reactionary website, The Resurgent, freely appropriated them (without acknowledgement). Though the Times has defended the Project’s integrity, other media outlets have highlighted the general controversy, and The Atlantic has published Wilentz’s academic critique, it is the Trumpistas who have hijacked the conversation. 

 

So thanks to the triumph of Team Bigotry we have yet to discover what the historians, the journalists and the 1619 Project more generally can actually teach us. But we can make a strong start by reflecting on the contrasting points of view signaled by these titles:  John Hope Franklin’s From Slavery to Freedom and August Meier’s and Elliot Rudwick’s From Plantation to Ghetto. Back in the 1960s, when African American history was first establishing itself as a mainstream field of study, these two dominated the textbook market. Together they presented sharply differing alternatives for teaching about slavery and its legacies. Each is as influential today as it was back then. 

 

Pick From Slavery to Freedom and you develop a history course around a text that foregrounds sustained activism that produced significant change, presumably for the better.  Select From Plantation to Ghetto and you prepare students for a sobering overview of racist continuity that has persisted across the centuries despite all the struggles against it.  Martin Luther King perfectly captured the spirit of Franklin’s text when affirming that “The arc of history bends toward freedom.”  Amiri Baraka (LeRoi Jones) did exactly that for Meier’s and Rudwick’s text when lamenting the “ever changing same” of African American history. 

 

Guided by both King and Baraka, we hit bedrock. Their conflicting insights, partial though they are, carry equal measures of truth. Heeding King and the historian/critics who share his perspectives, let’s inquire: Was African American history replete with life-changing moments, epochal struggles, liberating ideologies, unexpected leaps ahead, and daring cross-racial collaborations? History’s reply is “of course.”  Heeding the 1619 journalists who share Baraka’s perspective, let’s ask: Was African American history determined by a white racism so intense that it repeatedly crushed aspirations, inflicted terrible violence, undercut democratic values, made victories pyrrhic and pulled leaps ahead rapidly backwards? The answer again is “of course.” By making these two affirmations we have finally identified a central question that can reanimate our “teachable moment.”

 

Imagine the Times journalists and an open-minded group of historian-critics debating this question:  “To What Degree Has the Arc of African American History Bent toward Freedom?” Also imagine them pursuing this discussion in a spirit of informed collegiality in, say, a nationally televised PBS forum. And since we are in the business of reanimating, let’s finally imagine that we have summoned the ideal Moderator/Commentator, a journalist who made history and who did more than any other survivor of slavery to educate Americans about the depth of white racism and black resistance: Frederick Douglass. Can you imagine a more “teachable moment”?

 

This scene, fanciful as it is, amply suggests what should have taken place had the historian-critics chosen to be teachers, not gatekeepers. It also provokes us to realize that we can searchingly interrogate our nation’s complex chronicle of racial injustice while acknowledging the areas in which we have made palpable progress. Opportunity still awaits us to snatch back the trophy from Team Bigotry and push back together, journalists and historians alike, against our own rising tide of white supremacy. 

 

 

Editor's note: The History News Network is attempting to create a forum to discuss how historians can invite the public to learn and reflect on the pain and paradoxes of African American history and how journalists and historians might learn from and collaborate with one another in addressing this history. We hope to update readers soon. 

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174256 https://historynewsnetwork.org/article/174256 0
The Travesty of the Century

 

After three years in office, one can hardly be surprised at what Trump is capable of saying, doing, or scheming. In the middle of his impeachment trial, Trump finally released his “deal of the century”—a deal that completely ignored several United Nations resolutions, accords sponsored by the European community and the United States, and bilateral agreements between Israel and the Palestinians. Trump assigned his ‘internationally recognized top expert on Middle Eastern affairs’, Jared Kushner, to come up with a deal to solve a seven-decade-old conflict that has eluded every American administration since 1948.

Whereas the US has over the years played a central role in the effort to solve the Israeli-Palestinian conflict, no American administration has provided such a detailed proposal, certainly not one that grants Israel its entire wish list. Every past administration knew full well that prejudicing the outcome would doom any prospective deal from the start, and thus settled on providing a general outline consistent with prior internationally recognized agreements. When it comes to Trump, though, prior interim agreements, UN resolutions, and numerous face-to-face negotiations between the two sides simply do not matter. Instead, he relies on his “negotiating skills” and crude audacity to offer a solution that no one with any deep knowledge of the history of the conflict, its intricacies, and its psychological and pragmatic dimensions would even contemplate.

It is important to note, however, that both Israel and the Palestinians have over the years denied each other’s right to exist in an independent state, and to suggest that one side or the other is innocent and wholly wronged is a fallacy. Both have contributed to the impasse and both are guilty of failing to adhere to the numerous agreements sponsored by the international community, to which they initially subscribed. The following offers a synopsis of these resolutions and agreements.

On November 29, 1947, the United Nations General Assembly passed Resolution 181, stating that “independent Arab and Jewish States…shall come into existence in Palestine…not later than 1 October 1948.” On November 22, 1967, the UN Security Council passed Resolution 242, “Emphasizing … respect for and acknowledgement of the sovereignty, territorial integrity and political independence of every State in the area…to live in peace within secure and recognized boundaries…” On October 22, 1973, Security Council Resolution 338 “Calls upon the parties concerned to start immediately after the cease-fire the implementation of Security Council resolution 242 (1967) in all of its parts…” On September 17, 1978, the Camp David Accords declared that “the agreed basis for a peaceful settlement is…United Nations Security Council Resolution 242, in all its parts.” On September 13, 1993, the Oslo Accords aimed to establish principles of self-government, “leading to a permanent settlement based on Security Council resolutions 242 (1967) and 338 (1973).” On March 28, 2002, the Arab Peace Initiative, which was unanimously endorsed by the Arab League and the international community, including a majority of Israelis, “[called] for…Israel’s acceptance of an independent Palestinian state with East Jerusalem as its capital…” On April 30, 2003, the Quartet’s (US, EU, UN, and Russia) Road Map for Peace insisted that “a settlement…will result in the emergence of a…Palestinian state living side by side in peace and security with Israel and its other neighbors.”

Trump, however, in his wisdom, chose to completely ignore these prior resolutions and instead focused primarily on what he considers ‘best for Israel’, though the deal will do more harm to Israel than he could possibly imagine. I dare say that he may well understand the dire implications for Israel, but cares little as long as it serves his interests. Trump is known for violating international accords: he withdrew from the Paris Accord on climate change, the Iran deal (JCPOA), and trade agreements with China, Canada, and Mexico, and domestically he revoked scores of regulations enacted by the Obama administration. To be sure, he wants to put his own mark on everything, whether he agrees or disagrees with the subject matter. This raises the question: by what logic can Trump assume for himself the political, religious, and moral right to divide an occupied land between Israel and the Palestinians in defiance of all previous accords and internationally recognized agreements?

Whereas he consulted with the Israelis ad nauseam on every provision of his Deal, he completely ignored the Palestinians. Notwithstanding the fact that the Palestinians severed direct talks with the US as a result of Trump’s recognition of Jerusalem as Israel’s capital, at a minimum he should have initiated back-channel contacts with Palestinian leaders and considered their requirements, which could have ensured some receptivity rather than outright rejection. Moreover, unveiling his grandiose Deal standing side by side with Netanyahu sent an unambiguous message as to where he really stands and to whom he is appealing. This scene alone was enough to disgust even moderate Palestinians, who otherwise would have at least paid lip service to the Deal. But that was not on Trump’s agenda. On the contrary, he did so deliberately for his targeted audience—and in that, he succeeded.

Like everything else, whatever Trump touches dies, and if there was any hope for an Israeli-Palestinian peace it has now been deferred for years if not decades. The Israelis will waste no time acting on all the provisions of the Deal. Gantz, the leader of the Blue and White Party, has already stated that if he formed the new Israeli government, he would annex all settlements and the Jordan Valley. For Gantz, just like for Netanyahu, American political support is what matters, irrespective of any other internationally recognized accords that have granted the Palestinians the right to establish an independent state of their own. To be sure, Trump’s peace plan should be renamed “the travesty of the century,” for which Israelis and Palestinians will pay with their blood.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174314 https://historynewsnetwork.org/article/174314 0
The Dramatic Relationship Between Black America and the Academy Awards

 

After weeks of hype, the Oscars air this evening. The announcement of the nominations last month reignited discussions of race and the Oscars. The number of nominees of color in the main categories was almost nil. With the exception of Black British singer/actress Cynthia Erivo, nominated for her splendid performance in the Kasi Lemmons film Harriet, no non-White person scored a nomination. Anticipation that the mesmerizing Latina actress Jennifer Lopez, the deeply engaging Asian actress Awkwafina, the hilarious legendary comedian Eddie Murphy, and other performers of color would be acknowledged by the Academy for their performances was quickly dashed on January 13th. In fact, it was a very white, very male roster of honorees.

 

As has been the case over the past few years, the roster of acting and directing nominees was intensely dissected and debated. Multiple Oscar-nominated actor Joaquin Phoenix spoke truth to power at the BAFTA awards when he directly and passionately decried and criticized many of his peers, as well as himself, for the pitiful dearth of actors of color being acknowledged for their performances. Mega-influential fiction writer Stephen King found himself in the middle of an intense debate when he stated that talent is the sole criterion that should be considered when judging art and that race, gender, etc. should not be considered. 

 

After a torrent of fierce criticism directed toward him, the much beloved and admired author backtracked, retreated from his initial comments, and conceded that race is indeed a crucial factor in Hollywood, as it is in virtually every avenue of American life. His initial remarks demonstrated a degree of tone deafness and, quite frankly, arrogance in a person who prides himself on being politically, socially and culturally progressive. While his comments may have been fuel for commentary among the mainstream media, they were hardly surprising to many, if not most, Black Americans. Indeed, King's comments epitomized and validated a long-running historical pattern of White liberal arrogance: the white liberal who believes himself or herself to be so socially and culturally adept that he or she eventually arrives at the conclusion that any comments they espouse are valid or worthy of praise. Mr. King got a harsh dose of reality. Whether or not he was genuine in his sudden contrition, the fact is that King’s “sudden epiphany” was right on target. That has particularly been the case in the relationship between Black America and the Academy.

 

In July 2013, American film marketing executive Cheryl Boone Isaacs was named president of the Academy of Motion Picture Arts and Sciences (AMPAS). She was the first black American (and only the third woman, after actresses Bette Davis and Fay Kanin) to be selected to head such a prestigious organization. She held this position until 2017. Several decades earlier, in 1939, Hattie McDaniel became the first black person to win an Oscar, for her performance in the classic movie Gone With the Wind. Her victory was a bittersweet one in that her speech was prepared for her and she and her guests were forced to sit in a segregated section of the building where the event took place. These two similar yet distinctive examples are representative of the complex relationship between black Americans and the Oscars.

 

From their origin as a small dinner party of actors, actresses, movie executives, and producers in 1927, the Academy Awards moved to full ceremony status in 1929. Along with ceremonies and spectacles such as the Miss America Pageant, the Super Bowl, and the Grammys, the Oscars have remained among the most watched annual events for Americans as well as viewers throughout the world. 

 

Throughout its existence, the African American community has had an ambiguous relationship with the Academy Awards. While the selections of McDaniel and Boone Isaacs as best supporting actress and Academy president were significant by any standard and widely applauded, the history of blacks and the Academy has been somewhat complex. After Hattie McDaniel won her Oscar, it was not until a decade later, in 1949, that another black actress, Ethel Waters, was nominated for best supporting actress, for her performance in the movie Pinky. Unlike McDaniel, Waters was unsuccessful in her quest to win an Academy Award; the prize went to Mercedes McCambridge for All the King's Men.

 

In 1954, the beautiful Dorothy Dandridge became the first Black woman nominated for best actress in a lead role, for her performance in Carmen Jones. The Oscar that year went to Grace Kelly for her performance in The Country Girl. It was not until 1958 that a black male, Sidney Poitier, was nominated for an Oscar, for his convict role in The Defiant Ones. Poitier was nominated again in 1963 and became the first black person to win the Oscar for best actor, for his lead role as Homer Smith, an itinerant handyman who assists a group of immigrant nuns in the Arizona desert, in Lilies of the Field.

 

While his performance was a good one, there is no doubt that the broader cultural and political landscape influenced Poitier’s win. Martin Luther King Jr. had delivered his iconic “I Have a Dream” speech on the steps of the Lincoln Memorial several months earlier, during the summer of 1963, and the new president, Lyndon Johnson, would sign the 1964 Civil Rights Act into law a few months later, in July. These events likely influenced a number of Oscar voters. The legendary mid-20th-century gossip columnist Hedda Hopper, who was known to be racist and anti-Semitic, was also an Anglophobe who detested the number of British actors nominated that year. As a result, she aggressively campaigned for Poitier. Talk about politics of the surreal. Such behavior gave credence to the saying that politics makes strange bedfellows! It also lent credence to what the late high-society author and cultural critic Truman Capote observed: that politics and sentiment are major factors when it comes to the Oscars.

 

Milestones aside, Poitier’s triumph was short-lived, as Black performers received scant and sporadic nominations in the decade following his historic win. James Earl Jones, Diana Ross, Cicely Tyson and Diahann Carroll were among those who received the Academy’s blessing during the early 1970s. Then, from 1975 to 1981, not a single Black performer was acknowledged with a nomination. Howard Rollins’ nomination for Ragtime in 1981 disrupted the long-running drought, and the following year Louis Gossett Jr. won an award for his role as a tough-as-nails drill sergeant in the movie An Officer and a Gentleman. Frustrated by the continuing dearth of Black performers gracing movie screens, the Hollywood branch of the NAACP criticized the Academy in the early 1980s for what it saw as a chronic lack of black nominees. 

 

During the mid-1980s, The Color Purple, a film directed by Steven Spielberg and based on the 1983 Pulitzer Prize-winning novel by Alice Walker, was nominated for 11 Academy Awards. The film became a lightning rod of controversy and set off a number of heated and passionate debates in the black community, particularly over its less than stellar depiction of black men. In fact, even conservative publications such as the National Review, the vanguard of American conservatism at the time, denounced the film, stating that not a single black man in the movie had any admirable or redeeming qualities.

 

The Color Purple marked the first time that multiple black actresses received nominations for the same film: Oprah Winfrey, Margaret Avery and Whoopi Goldberg were all nominated. Despite its multiple nominations and the excessive amount of attention it garnered, the movie failed to win any Oscars, tying the 1977 movie The Turning Point’s record as the most nominated film never to win. What made this controversy even more interesting (arguably amusing) was that many of the same people who were critical of the movie, including the Hollywood branch of the NAACP, “threatened to sue” the Academy for failing to award any Oscars to it. There is no doubt that such a suit would have been unsuccessful—how, for example, would anyone know which of the 4,800 members voted for or against the movie?

 

Eventually, the Hollywood NAACP came to its senses, but its initial reaction was not the finest moment for the civil rights organization.

 

During the 1988 Oscar ceremony, Hollywood megastar Eddie Murphy took the Academy to task, decrying what he perceived as the lack of recognition given to black performers in the movie industry, before presenting the award for best picture that year.

 

During the 1990s, black actors, including Denzel Washington, Angela Bassett, Whoopi Goldberg, Laurence Fishburne and Morgan Freeman, were increasingly nominated by the Academy for their performances. In 1990, Goldberg became the second black woman to win an Oscar, for her role as the psychic medium in Ghost.

 

By the 21st century, black nominees had become a staple of the Oscar circuit. In 2001, a year dubbed by a number of blacks (and non-blacks) as “the year of the black Oscars,” the Academy honored Sidney Poitier with a lifetime achievement award, while Denzel Washington and Will Smith were nominated for best actor and Halle Berry for best actress. Berry and Washington were victorious. For Washington, it was his second Academy Award. Berry became the first, and currently the only, black woman to win best actress.

 

In 2004, Jamie Foxx became the first black actor to receive two nominations in the same year. He won the Academy Award for his spellbinding performance in the movie Ray. Forest Whitaker, Morgan Freeman, Jennifer Hudson, Octavia Spencer, Mo'Nique, Viola Davis and Regina King have also taken home Hollywood’s most coveted honor. 

 

A number of black performers, including Chiwetel Ejiofor, Lupita Nyong'o and Barkhad Abdi, have been nominated in the best actor and best supporting actor and actress categories. In 2013, black British director Steve McQueen became the first Black person to receive the best picture honor, for his film 12 Years a Slave, and Lupita Nyong'o won best supporting actress for her role in the film. In the weeks leading up to the award ceremony, Fox Searchlight Pictures aggressively showcased posters of the film displaying the statement “It’s time.” There was nothing ambiguous about the message.

 

McQueen’s triumph aside, there have been only four black directors nominated: McQueen, the late John Singleton (1991’s “Boyz n the Hood”), Lee Daniels (2009’s “Precious”), and Barry Jenkins (“Moonlight”). Moonlight won best picture in 2016, allowing Jenkins to join McQueen as the second Black person to win an Oscar for best picture. A Black director did receive an Oscar in the past year, but it was an honorary one. Never one to shy away from controversy, legendary director Spike Lee used his acceptance speech in 2019 to forcefully, directly and candidly remind the industry that the world is changing, and will soon be minority white. “All the people out there,” he said, “who are in positions of hiring: You better get smart.”

 

Whatever your opinion, and in spite of the Oscars’ reductive and often complicated history as it relates to race, the truth is that the ceremony has been a mainstay of American popular culture and has been influential in the lives of a number of black entertainers. I, like millions of people all over the world, will likely be tuning in on February 9th to see who will take home an Academy Award.

 

Editor's note: This piece was updated to note that Lupita Nyong'o won for best supporting actress in 2013. 

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174245 https://historynewsnetwork.org/article/174245 0
Should the Democratic Presidential Candidates Announce Their VP Picks Now?

Some pundits have recently suggested that presidential candidates should identify their running mates before the caucus and primary season gets under way. Ted Rall wrote on January 24, 2020 that the United States should require presidential candidates to “announce their veep picks at the same time they announce their intent to run.”  Rall believes such a requirement would be more democratic and provide primary voters useful information about the possible successor.  The same day, Matt Bai made the narrower suggestion in the Washington Post that older presidential candidates in this year’s race should “at least release a short list of possible running mates now” before the voting begins.  Bai’s column focused on Senator Bernie Sanders but he would apply this requirement to former Vice President Joe Biden and Senator Elizabeth Warren, too. Gordon Weil has suggested that Biden, Sanders, and Donald Trump disclose their running mates before the first caucuses and primaries.  Jared Cohen argues that a Democratic presidential candidate should name his or her running mate “now” as a strategy to “surge ahead of a crowded field.”        

 

The recent history of vice-presidential selection and of the vice presidency generally suggests, however, that these well-intended proposals rest on mistaken judgments about the way the vice-presidential selection system now operates, exaggerate the benefits of the proposed reforms, and underestimate the difficulty of implementing those remedies.

 

The vice presidency, especially as it has developed in recent decades, has two significant functions.  On an ongoing basis, the vice president provides a close adviser and trouble-shooter for the president who can improve the quality and implementation of American public policy.  The vice president also serves the contingent function of providing a qualified and prepared presidential successor (in case of a presidential death, resignation, or removal) or temporary pinch-hitter (in case of a presidential inability).  These two functions require that the vice president be presidential and politically and personally compatible with the president.

 

The political calendar presents an apparent challenge, however, since vice-presidential candidates are chosen with an eye towards the November election. Presidential candidates invariably consider political factors in choosing a running mate.

 

Yet increasingly most presidential candidates conclude that their political and governing interests coincide when choosing a running mate and that both dictate choosing a running mate who is capable of being president. Most vice-presidential candidates during the last six or seven decades have been politicians whose past or subsequent public service marked them as presidential figures as perceived by reachable voters. Recent vice presidents Richard M. Nixon, Lyndon B. Johnson, Hubert H. Humphrey, Walter F. Mondale, George H.W. Bush, Al Gore, Dick Cheney, and Joe Biden were among their parties’ leading lights when chosen for their ticket.  Defeated running mates like Estes Kefauver, Henry Cabot Lodge, Edmund Muskie, Bob Dole, Lloyd Bentsen, Jack Kemp, Joe Lieberman, Paul Ryan, and Tim Kaine were also among their parties’ most well-regarded figures when chosen. Dan Quayle is often mocked but he was a well-regarded senator who made important contributions to the Bush administration.  Geraldine Ferraro’s credentials (three terms in the House of Representatives) were more modest than most but she was chosen when women were largely excluded from public service.  Senators like John Sparkman and especially Tom Eagleton had distinguished careers in the upper chamber (Eagleton was one of two former senators given the unprecedented honor of speaking when the Senate celebrated its bicentennial) and Sargent Shriver had performed ably in the executive branch in domestic and foreign policy roles. Mike Pence is widely viewed as a plausible future presidential aspirant.  The questionable choices—William Miller, Spiro T. Agnew, John Edwards, Sarah Palin—were chosen by candidates facing uphill races (Miller, Palin) or were simply mistakes.

 

History suggests that Rall’s reform (that presidential candidates announce their running mate when they announce their candidacy) and Weil’s (that certain older candidates announce their running mates before caucus or primary voting begins) would diminish the quality of vice-presidential candidates. Rall’s proposal, and probably Weil’s, would eliminate from consideration anyone running for president, thinking of that option, or supporting a rival candidate. John F. Kennedy could not have chosen Johnson, nor Ronald Reagan Bush, nor Barack Obama Biden. The possibility that Humphrey would run in 1976 would have precluded Mondale, a Humphrey protégé, from being Jimmy Carter’s running mate.

 

Moreover, the context in the summer of a presidential election year, when presidential candidates identify their running mates, is more conducive to a good choice than a year or two earlier when presidential candidates now announce. Some successful presidential nominees—Jimmy Carter, Mike Dukakis, Bill Clinton, Obama, Trump, among others—were surprises.  It’s inconceivable that Mondale, Bentsen, Gore, Biden or Pence would have cast their lot with these improbable presidential nominees.  Kemp wouldn’t have signed on as Dole’s running mate early on either.  Cheney declined to be considered as George W. Bush’s running mate in spring 2000 but changed his mind after working with Bush suggested what his role might be given Bush’s operating style.  

 

The transformation presidential candidates experience between candidacy announcement and vice-presidential selection encourages better decisions. Understandably, presidential candidates and their top aides initially focus on securing the nomination. The vice-presidential selection becomes their major preoccupation once that success is assured. Carter would not have chosen Mondale initially; he concluded Mondale was his best option only after they spent time together and after Carter spoke to others who knew the prospective candidates. Dole disparaged Kemp as “the Quarterback,” but after examining alternatives he concluded that choosing this long-time rival made sense. Sometimes presidential candidates learn about running mates as they interact through the primary process, an experience that probably contributed to Romney’s selection of Ryan and Obama’s of Biden.

 

Finally, vice-presidential candidates are chosen after a long, intensive and intrusive vetting process.   That essential part of the vice-presidential selection process would be prevented if the decision were made as Rall and Weil propose.

 

Cohen recommends naming a running mate now for its political, not governing, benefits.  Yet it is not at all clear that the sort of running mate who could conceivably move the needle in a positive direction would decide to join forces at this early stage with a presidential candidate who needed to pursue such an atypical strategy to succeed.  Unless a presidential candidate secured a running mate who was appealing and ready for prime time, his or her candidacy would likely be hurt, rather than helped.

 

Bai starts from a sensible premise (that the likelihood of a succession is greater with an older president), and his proposal, that older candidates announce a list of perhaps three running mates, is more limited in its reach (to older candidates) and less intrusive (a list, rather than a choice). Yet history shows that vice-presidential selection is not as unilateral as he suggests, and he is overly optimistic about the merits of his proposal.

 

Although presidential candidates select a running mate, they do so after lengthy consultation and after considering the likely reaction of reachable voters. Bai is right that the 2008 Republican convention accepted Palin, but that was because it liked her and inferred from her selection that Senator John McCain was more conservative than the delegates had feared. In fact, part of the reason McCain chose Palin, not Lieberman, was that he feared an adverse reaction if he chose Gore’s former running mate. Sure, conventions don’t want to make trouble for their presidential nominee, but they rubber-stamp vice-presidential nominees partly because nominees weigh party sentiment heavily in making the choice. And party sentiment doesn’t prevent a bad choice: Edwards had done well in the primaries and was very popular with the Democratic electorate and convention.

 

A requirement that older candidates disclose a list from which they would choose their running mate would be hard, if not impossible, to implement, and counterproductive. A list limited to three would exclude some who, come summer, would be attractive running mates. Rival candidates might be omitted, but if not, they might feel compelled to slam the door on the vice presidency to preserve credibility as presidential candidates. An unlimited list would be unrevealing. In 1988, the Bush campaign leaked a list of possible running mates on the eve of the convention. It was long enough that Quayle’s selection seemed inconceivable, so his prospects drew no scrutiny. And if such a proposal has merit, which it doesn’t, why not apply it to all candidates, since the modern vice presidency helps in governing more often than it supplies a successor and the selection provides information about a candidate’s values?

 

There’s no harm in asking about vice-presidential selection, as Bai and Weil propose, but succession concerns can be better addressed by insisting that candidates of both parties release meaningful information about their medical histories and that presidential candidates choose running mates who would be plausible presidents. Presidential candidates generally realize that choosing a running mate who cannot withstand the intense scrutiny of a national campaign, including a vice-presidential debate, is bad politics as well as bad government. Public expectations of the enhanced vice-presidential role and recognition of the possibility of succession give presidential candidates greater reason to choose well than was once the case.

 

The vice-presidential selection system has worked pretty well in recent decades and has allowed presidents to enlist the help of able vice presidents who are compatible with them.  That progress is best preserved by giving presidential candidates incentive to choose well, not by introducing artificial and counter-productive requirements.

 

Copyright Joel K. Goldstein 2020

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174243 https://historynewsnetwork.org/article/174243 0
Roundup Top 10!  

 

Who’s Really Shredding Standards on Capitol Hill?

by Joanne Freeman

Naming the alleged whistle-blower is much worse than tearing up a speech.

 

Bernie Sanders Has Already Won

by Michael Kazin

Whether he captures the White House or not, he has transformed the Democratic Party.

 

 

What winning New Hampshire — and its media frenzy — could mean for Bernie Sanders

by Kathryn Cramer Brownell

The New Hampshire returns tell us a lot about the leading candidates.

 

 

America held hostage

by David Marks

Forty years after the Iran hostage crisis, its impact endures.

 

 

Is Pete Buttigieg Jimmy Carter 2.0?

by J. Brooks Flippen

To win the White House and be a successful president, he must learn from an eerily similar candidate.

 

 

When White Women Wanted a Monument to Black ‘Mammies’

by Alison M. Parker

A 1923 fight shows Confederate monuments are about power, not Southern heritage.

 

 

Donald Trump’s continued assault on government workers betrays American farmers

by Louis A. Ferleger

Government scientists made U.S. agriculture powerful, but Trump administration cuts could undermine it.

 

 

The Civil War Wasn't Just About the Union and the Confederacy. Native Americans Played a Role Too

by Megan Kate Nelson

“Inasmuch as bloody [conflicts] were the order of the day in those times,” their report read, “it is easy to see that each comet was the harbinger of a fearful and devastating war.”

 

 

The forgotten book that launched the Reagan Revolution

by Craig Fehrman

While Reagan’s biographers have explored the influence of GE and SAG on the budding politician, they’ve largely ignored what came next — namely “Where’s the Rest of Me?”

 

 

Shifting Collective Memory in Tulsa

by Russell Cobb

The African-American community is working to change the narrative of the 1921 massacre.

 

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174319 https://historynewsnetwork.org/article/174319 0
The Communist Manifesto Turns 172

This month marks 172 years since the first publication of the Communist Manifesto. All around the world people will be commemorating February 20th with group read-alouds and other ways of noting the occasion. Undoubtedly, this is a moment that we should not allow to pass without some reflection on what Marx and Engels’ pamphlet means to us today. Originally published anonymously in German by the Workers’ Educational Association in 1848, the Manifesto would not appear in English translation until 1850. For the first decades of its life the Manifesto was mostly forgotten, and it would not be published in the United States until 1872.

We are living at a time when – if not communism – at least socialism is gaining ground in this country, to a degree that few could have foreseen only a decade ago. Bernie Sanders, for example, is a self-proclaimed Democratic Socialist and a frontrunner among the Democratic candidates seeking the presidential nomination. When it comes to communism, however, there are still grave misgivings about being labeled as such, even by those who identify with the radical left. At the same time, we are entering an era of unprecedented inequality, in which wealth has become concentrated in the hands of a few to a degree that is almost hard to imagine: three or four individuals in this country, for instance, hold wealth exceeding the combined wealth of over fifty percent of the population. This vast inequality and the ever-growing concentration of capital are among the many reasons why the Manifesto is as important now as it was when it first saw the light of day in that fateful year of 1848, if not more so.

Income inequality in this country has been growing for decades. The Pew Research Center reports that in 1982, the highest-earning 1 percent of families received 10.8 percent of all pretax income, while the bottom 90 percent received 64.7 percent. Three decades later, the top 1 percent received 22.5 percent of pretax income, while the bottom 90 percent’s share had fallen to 49.6 percent. As Helene D. Gayle, CEO of the Chicago Community Trust, observed, “The difference between rich and poor is becoming more extreme, and as income inequality widens the wealth gap in major nations, education, health and social mobility are all threatened.”

The gap between those who have and those who have not is becoming ever wider – while the rights of workers are under attack around the world. Union leaders are threatened with violence or murdered. Indeed, the International Trade Union Confederation reports that 2019 saw “the use of extreme violence against the defenders of workplace rights, large-scale arrests and detentions.” The number of countries which do not allow workers to establish or join a trade union increased from 92 in 2018 to 107 in 2019. In 2018, 53 trade union members were murdered – and in 52 countries workers were subjected to physical violence. In 72 percent of countries, workers have only restricted access to justice, or none at all. As Noam Chomsky observed, “Policies are designed to undermine working class organization and the reason is not only the unions fight for workers' rights, but they also have a democratizing effect. These are institutions in which people without power can get together, support one another, learn about the world, try out their ideas, initiate programs, and that is dangerous.” In fact, labor union membership has been declining for well over fifty years right here in the US.
Unions now represent only 7 percent of private sector workers – a significant drop from the 35 percent of the 1950s. Moreover, studies have shown that strong unions are good for the middle class; the Center for American Progress reports, for example, that middle-class income has dropped in tandem with the shrinking number of US union members. This weakening of unions and collective bargaining has allowed employer power to increase immensely, contributed to the stagnation of real wages, and led to “a decline in the share of productivity gains going to workers.”

Around the world, children are still forced to labor in often unsafe and extremely hazardous conditions. Approximately 120 million children are engaged in hazardous work – and over 70 million are under the age of 10. The International Labour Organization estimates that 22,000 children are killed at work globally every year. The abolition of child labor was of course one of the immediate reforms demanded in the Manifesto – and 172 years later it has yet to become a reality. Studies estimate that as many as 250 million children between the ages of 5 and 14 work in sweatshops in developing countries around the world. The US Department of Labor defines a sweatshop as a factory that violates two or more labor laws. Sweatshops often have poor and unsafe working conditions, unfair wages and unreasonable hours, as well as a lack of benefits for workers. Economists sometimes argue that sweatshops help to alleviate poverty, claiming that as bad as they are, they are still better than working in rural conditions. These claims are dubious at best, but more to the point, sweatshops are inconsistent with human dignity. As Denis Arnold and Norman Bowie argue in their essay “Sweatshops and Respect for Persons,” the managers of multinational enterprises that “encourage or tolerate violations of the rule of law; use coercion; allow unsafe working conditions; and provide below subsistence wages, disavow their own dignity and that of their workers.”

It is often assumed – wrongly – that Marx and Engels described in full what they thought the future communist society would look like. But aside from a few tantalizing suggestions they offered very little in this regard, not in the Manifesto nor anywhere else, preferring instead to analyze the social contradictions inherent to the capitalist mode of production itself – contradictions which they thought would lead inevitably to its demise. One thing that is clear from their few suggestions, however, is that workers would not be alienated from the process of production and from the fruits of their labor – which implies something like worker self-management, workplace democracy, or, perhaps most accurately, worker self-directed enterprises, to borrow a phrase from economist Richard Wolff. As Wolff points out, these enterprises “divide all the labors to be performed… determine what is to be produced, how it is to be produced, and where it is to be produced” and, perhaps most crucially, “decide on the use and distribution of the resulting output or revenues.” Such firms of course already exist; most notably, for example, Mondragon in Spain. We know conclusively that workplace democracy can be and has been successful, and that such firms can in fact outcompete traditional, hierarchically organized capitalist firms. All of which is to say that the Communist Manifesto is not a historical relic of a bygone era, an era of which many would like to think we have washed our hands.
As long as workers’ rights are trampled on and children are pressed into wretched servitude, and as long as real wages stagnate so that economic inequality continues to grow, allowing wealth to be ever more concentrated in the hands of the few, the Communist Manifesto will continue to resonate, and we will hear the clarion call for the workers of the world to unite, “for they have nothing to lose but their chains. They have a world to win.”

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174316 https://historynewsnetwork.org/article/174316 0
How Somalis Use Theatre to Rebuild Culturally Recently Mogadishu suffered yet another horrific terrorist attack, the deadliest in two years, killing nearly 80 people and wounding more than 100 people. Much of the discourse about wars such as this merely chronicles numbers: lives lost, dollars of damage done, years to rebuild. The focus of post-conflict transition, then, becomes remains on the recovery of formal institutions, the type of the regime, the growth of the economy, and the strength of the electoral processes. However, we seldom make room for the equally important process of cultural rebuilding that gradually takes place, in particular the efforts to rescue democratic spaces that facilitate everyday peace. For Somalis, the arts, especially theatre has remained a crucial site for the social and political reimagination of the nation. 

 

Modern Somali theatre rose to prominence in the 1960s, the period following independence and the subsequent unification of Somaliland and Somalia. By the 1970s, there were multiple shows a night in any given city, and the average Somali adult was considered a regular theatregoer regardless of socioeconomic status.

 

Several elements make the Somali theatre tradition unique, most distinctly its inextricable tie to poetry. Large parts of a play, usually those that carry its emotional weight, are conducted in verse. The ease with which poetry and prose coexist on stage reflects the unparalleled space that poetry occupies in Somali culture. For Somalis, poetry signifies immense national and linguistic pride. Poetry consistently outsells fiction, and public readings routinely draw massive crowds. In fact, the successful playwrights of the twentieth century were poets themselves, echoing the fluidity that exists between the genres. 

 

And since the rise of Somali theatre coincided with the golden age of Somali music, a live band accompanies most theatrical performances. This combination has allowed for the sustained musical component in Somali plays, where characters intermittently perform songs to convey difficult emotions or profess their love. During the song, audience members are allowed to go on stage to deliver flowers or dance with the actors. As the song concludes, the fourth wall resumes. 

 

It is on stage that the most urgent social issues are deliberated and tacit taboos tackled. Successful plays, such as Hablayahow Hadmaad Guursan Doontaan (Ladies, When Will You Marry?) and Beenay Wa Run (The Lie is the Truth), take up questions about feminism, Somalia’s increasing integration into global geopolitics, religion, and the changing conventions of love and marriage. During the 1960s and 1970s, northern artists performed alongside southern ones, with only their accents setting them apart. In a society where regional and clan divisions govern almost all aspects of social life, the stage was a significant exception. Gender roles were rigorously contested, expectations revisited. Female artists, formerly considered social outcasts and an affront to familial and communal honor, became widely revered. And to this day, the stage remains the only acceptable and safe place for a man to cross-dress. 

 

The popularity of theatre, and the social latitude allowed to artists to comment on and contest the status quo, have earned poets and playwrights an irrevocable place in the historical trajectory of the region. It is widely believed that the 1968 play Gaaraabidhaan (Glow Worm) by famed playwright Hasan Shiekh Muumin inspired Siyad Barre’s military coup in 1969. And it was the poets and playwrights of the 1980s who proved instrumental in the resistance against Barre’s oppressive regime. For many Somalis, Landcruiser, a play by Cabdi Muxumed Amiin staged at the National Theatre of Somalia in 1989, incited the uprising that eventually led to Barre’s fall. 

 

Theatre and poetry continued to play a crucial role in the post-rupture period, serving as tools to initiate peace talks and promote social healing. After the collapse of the central government and the start of the civil war in Somalia, a collective of Somali poets, singers, and playwrights from across the region staged a play in Mogadishu titled Qoriga Dhig, Qaranka Dhis (Put Down the Gun, Build the Nation). In the early 2000s, Mohamed Ibrahim Warsame Hadraawi, hailed as the greatest living Somali poet, embarked on Socdaalka Nabada (Peace Journey). Walking the entire length of Somalia, including a visit to the prison in which he spent five years for his poetry, Hadraawi performed in support of a renewed commitment to peacebuilding. A citizen of Somaliland, Hadraawi expressed his resistance to the severing of cultural ties along nationalist borders. IREX Europe, a non-profit that supports democracy and human rights initiatives, led a UN-funded theatre and poetry caravan across Somaliland in 2010 to facilitate post-conflict dialogue. 

 

As the civil war continued to ravage Somalia and Somaliland began the gradual process of recovery, formal support for the arts largely disappeared. Yet sustained grassroots efforts endured to preserve the artistic heritage of Somalis and cultivate sites for engagement in cultural production. After Al-Shabab captured large parts of Somalia in the mid-2000s and banned music and other forms of entertainment, people organized underground concerts and shared music on clandestine memory cards, risking imprisonment or, worse, execution. In Somaliland, the Hargeisa Cultural Center is home to over 14,000 cassette recordings of plays and music that were collected, preserved, and donated by individuals who saved them as they fled the war that reportedly destroyed over 90 percent of their city. 

In 2012, the Somali National Theatre in Mogadishu re-opened its doors for the first time in 20 years after ordinary citizens and local businesses partnered with the first transitional federal government to raise the funds required to restore the theatre. Its first play, a comedy, garnered an audience of around a thousand people. Two weeks later a suicide bomber attacked the theatre, killing 10 people and wounding many more. The restoration of the Hargeisa National Theatre began a few years ago when the government sold it to a private developer. To calm the public apprehension about the privatization of the prominent cultural landmark, the developers promised to place the restored 3500-seat theatre at the heart of their new seven-story commercial center. 

 

The collective ownership of high art, not by an elite few but by the unremarked many, is arguably Somalis’ most cherished and best-sustained experiment in democracy. A heightened communal experience, Somali theatre at its core demands a tenacious faith in a public. This past year, the Somali National Theatre once again attempted a reconstruction; large crowds once again gathered for light-hearted, politically poignant entertainment in defiance of the terror to which they and their city are frequently subjected. The persistence of theatre and poetry in governing the daily discourse of a wounded people, and in permeating each of their recovering buildings and fractured lands, indicates not only the resilience of the Somali people but also the unflagging democratic spirit that resides within them. 

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174295 https://historynewsnetwork.org/article/174295 0
Who Deserves the Credit for a Good Economy? Ronald L. Feinman is the author of “Assassinations, Threats, and the American Presidency: From Andrew Jackson to Barack Obama” (Rowman Littlefield Publishers, 2015).  A paperback edition is now available.

In the State of the Union speech, President Donald Trump emphasized the strength of the American economy and took credit for an economic boom. As this claim will likely dominate Trump’s reelection campaign, it’s valuable to examine the last 50 years of presidential and economic history. 

 

The long economic expansion of the 1960s under Democrats John F. Kennedy and Lyndon B. Johnson ended in 1969 during the Nixon Administration, with a nearly year-long recession until late 1970, followed by a longer recession under Nixon and Ford from late 1973 to early 1975. The latter was directly caused by the Arab Oil Embargo that followed the Yom Kippur War between Egypt and Israel in October 1973, and it produced high inflation as well as rising unemployment.

 

The short recession of the first half of 1980 under Democrat Jimmy Carter was also related to the second Arab Oil Embargo, which led to high inflation in 1979 and 1980, as in 1974-1975; both recessions and their inflationary spirals were major factors in the electoral defeats of Ford in 1976 and Carter in 1980. Of course, Ford was also harmed by his pardon of Richard Nixon, and Carter was unpopular for his handling of the Iranian Hostage Crisis and the Soviet invasion of Afghanistan in the year before his reelection campaign.

 

A more serious recession occurred during the Reagan Presidency, provoked by the Federal Reserve’s effort to rein in the high inflation that persisted after Carter lost reelection; it produced the highest unemployment rate since 1939. Fortunately for Reagan, the recovery that came about in 1983-1984 led to a landslide reelection victory in 1984. 

 

During the first Bush Presidency, a recession occurred in the last half of 1990 into early 1991, caused by the tough economic restraints of the Federal Reserve and the effects of the Tax Reform Act of 1986 on real estate. This led to a lingering high unemployment rate.  Despite many people’s approval of Bush’s handling of the Gulf War, the troubles in the economy plus the independent candidacy of H. Ross Perot in 1992 influenced Bush’s 1992 loss.

 

During the administration of George W. Bush, two recessions occurred. The first lasted from March to November 2001 and was caused by the dotcom bubble, accounting scandals at major corporations, and the effects of the September 11 attacks. The economy quickly bounced back and Bush won reelection in 2004. 

 

However, a much more serious downturn, the “Great Recession,” lasted from December 2007 to June 2009 and was caused by a major housing bubble. This hurt John McCain’s presidential campaign in 2008, as many people wanted a change in leadership. This economic collapse was worse than the Ford or Reagan recessions in its long-term effects, and it posed a major challenge for Barack Obama, who entered office with the worst economy of any president since Franklin D. Roosevelt in 1933.

 

Barack Obama rose to the challenge and presided over the most dramatic drop in unemployment rates in modern economic history. The unemployment rate peaked at 10 percent in the fall of 2009. By the time Obama left office in January 2017, it had fallen to 4.7 percent. The stock market, as measured by the Dow Jones Industrial Average, rose by about 250 percent from 2009 to 2017.

 

By comparison, Franklin D. Roosevelt came into office with a 24.9 percent unemployment rate in 1933. Unemployment dropped every year through 1937, to 14.3 percent, but then rose with a new recession, reaching 19 percent in 1938 and 17.2 percent in 1939. The unemployment rate then went down to 14.6 percent in 1940 and 9.9 percent in 1941, and finally, with World War II in full swing, it fell to 4.7 percent in 1942 and under 2 percent for the remainder of the war years. 

 

Clearly, Donald Trump has benefited from what is now the longest economic expansion in American history. The unemployment rate has dropped to as low as 3.4 percent. The question that lingers is: who deserves the credit? Much of the hard work that created the economic recovery came under Obama’s administration and is simply continuing for now under Trump, which may benefit him in November 2020.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/blog/154312 https://historynewsnetwork.org/blog/154312 0
Remembrances, Race, and Role Models: The Renaming of a Middle School

 

After students of the Richard B. Haydock Academy of Arts and Sciences studied the City of Oxnard’s history, on May 15th of last year two middle schoolers petitioned the Oxnard School District board of trustees to rename their campus. Why? Because Haydock, as an early twentieth century superintendent of the district and a long-time councilmember, espoused racist views toward people of color. Furthermore, as a public official he advanced policies of racial exclusion in housing, public facilities, and schools.

 

The students discovered this from their study of David G. Garcia’s Strategies of Segregation: Race, Residence, and the Struggle for Educational Equality (2018).

 

Garcia, a UCLA professor and product of Oxnard’s public schools, documented how city and Ventura County officials purposefully set policies that injured the life chances of people of color by the provision of inferior housing and educational opportunities. While denying services enjoyed by Oxnard’s white residents, in 1917 councilmember Haydock blamed victims of municipal neglect when he stated, “We have laws to prevent the abuse of animals . . . but the people are allowed to abuse themselves. The ignorant are allowed to breed under conditions that become a threat and a menace to the welfare of the community.” Who were “the people” and “The ignorant,” according to Haydock? Ethnic Mexicans.

 

Four years later, at an Oxnard Rotary Club assembly, Haydock publicly bemoaned the presence of African Americans in the United States. For him and others of his class of this era, such as Superintendent of Ventura County Schools Blanche T. Reynolds, the restriction of people of color was the solution.

 

This history matters because it informs us of persistent social and economic inequities long after racist strategies—in the form of restrictive real estate covenants, residential redlining, gerrymandered school attendance boundaries engineered around segregated neighborhoods, and employment practices—were deemed illegal. This knowledge also complicates an appreciation of our nation’s ethos of equal opportunity, contrasted as it is by official actions that prohibited the realization of this value for people of color and women. For example, our history informs us that we are a nation devoted to democratic tenets even as elected leaders decreed oppressive acts of land dispossession, genocide, and slavery.

 

But should nefarious practices and views advanced by figures such as Haydock be completely stricken from public memory? Absolutely not. Just as I don’t favor the textual removal of racially restrictive covenants in residential deeds (although I support their inert state), the new name of the Academy of Arts and Sciences, whatever it may be, should come with a recognizable footnote so people can learn not only about the totality of Oxnard’s history but also its relationship with larger national currents. The Supreme Court case of Brown v. Board of Education (1954) comes to mind, as it declared unconstitutional the separate-but-equal doctrine of Plessy v. Ferguson (1896).

 

Let’s not forget Mendez v. Westminster (1946), which preceded Brown. For this case, future U.S. Supreme Court Justice Thurgood Marshall assisted in the writing of an amicus curiae brief for the National Association for the Advancement of Colored People.

 

By the study of this history, students can appreciate struggles for social justice by people from all backgrounds as well as the challenges that lay ahead.

 

And yes, past sins are inalterable. But we can motivate students to be agents of change for the better. Especially as we hear racist slurs of the past echoed by President Donald J. Trump.

 

So, I propose that the school be renamed the Rachel Murguia Wong Academy of Arts and Sciences. Raised in an era when ethnic Mexican students were not only segregated but also corporally and psychologically assaulted for speaking California’s first European language (Spanish), Murguia Wong committed her life to young people. As a Ventura County resident, she served on numerous civic and educational advisory committees. And after her work as an OSD community-school liaison in La Colonia’s Juanita (now Cesar Chavez) Elementary, Murguia Wong won an elected seat on the district’s board of trustees in 1971.

 

As a trustee, Murguia Wong championed Title 1 compensatory programs and teacher diversity, as well as the district’s full compliance with the summary judgment of Judge Harry Pregerson in Soria v. Oxnard School District Board of Trustees of 1971. Based on the agreed-upon facts in this case, Pregerson ordered busing as a means to dismantle the decades-long de facto (unofficial) segregation of the district’s schools.

 

After a resistant board majority appealed the decision, the Ninth Circuit ordered a trial in 1973. This time school board minutes of the 1930s surfaced that documented the district’s implementation of de jure (official) segregation in violation of the plaintiffs’ rights of equal protections guaranteed under the Fourteenth Amendment of the United States Constitution.

District records from 1937 and 1938 revealed that, at the behest of white parents, Superintendent Haydock, in collusion with the trustees, schemed byzantine strategies to segregate ethnic Mexican students. In a time when social Darwinist ideas of Anglo-Saxon superiority were popular, miscegenation was the primary fear of the nation’s white establishment.

 

Hence, Murguia Wong, unlike Haydock, was on the right side of history. The renaming of the Academy in her honor would provide opportunities for all the children to learn Oxnard’s nuanced history.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174247 https://historynewsnetwork.org/article/174247 0
The Erasure of the History of Slavery at Sullivan’s Island

 

The history of slavery in America is, to a great extent, the history of erasure.  For most of the century and a half since the Civil War ended, families, communities, churches, universities, banks, insurance companies, and a host of other institutions have gone out of their way to ignore past involvement in what Rev. Jim Wallis of Sojourners has labeled “America’s original sin.”

 

I was reminded of this a few months ago when my wife and I attended the annual meeting of the Association for the Study of African American Life and History in Charleston, South Carolina.  As we hadn’t visited the city in some three decades, we occasionally played hooky from the conference to go be tourists.  One morning, we took a commercial boat tour of Charleston harbor.  The captain gave a running commentary throughout our voyage as we passed major sites including the Battery, Fort Sumter, and others.  When we made the long drift alongside Sullivan’s Island at the entrance to the harbor, he shared the most extensive portion of his monologue.  We learned of the Battle of Fort Moultrie there during the American Revolution, of the island’s inflated real estate market, and of its many celebrity homes.

 

Curiously, the captain’s commentary completely ignored the most significant part of the history of Sullivan’s Island: its role in the importation of enslaved Africans.  The island served as a quarantine station for Africans arriving on slave ships, who spent days or weeks in “pest houses” until deemed “safe” for public auction.  Some 40% of the nearly 400,000 Africans imported into British North America and the young United States passed through this place.  It has been termed the “Ellis Island” of African Americans.

 

As someone who has specialized in African American history throughout my academic career, with a particular focus on slavery and abolition, I found this example of erasure particularly jarring.  I shouldn’t have.  I know, for example, that southern plantation homes regularly fail to inform tourists about the enslaved people who toiled at these places. Nevertheless, the warm sea breeze we had experienced during the earlier part of the tour immediately evaporated in a cold bath of anger and sadness.  Our effort to be tourists had failed to isolate me – even momentarily – from an awareness of the extent to which Americans still seek to eradicate the heritage of slavery from our collective consciousness.

 

Near the end of the harbor tour, almost in passing, the captain pointed out the site of a new African American history museum opening in Charleston.  Indeed, officials broke ground for the International African American Museum on October 25, 2019.  It is expected to greet visitors in 2021.  The press release announcing the ground breaking observed that the museum will “illuminate the story of the enslaved Africans who were taken from West Africa, entered North America in Charleston, SC, endured hardship and cruelty, and then contributed so significantly to the greatness of America. The museum . . . will honor the site where enslaved Africans arrived.”

 

Maybe the new museum, along with other recent developments of a similar nature, is a sign that we can get beyond the usual erasure of slavery from our national mind.  It would be a welcome change.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174248 https://historynewsnetwork.org/article/174248 0
The Making of a Periphery

 

The islands of Southeast Asia were once sites for the production and trade of prized commodities—including cloves, nutmeg, and mace—so valuable that in the fifteenth and sixteenth centuries they attracted seafaring explorers and traders from distant European countries. Today Malaysia, the Philippines, and Indonesia principally export their surplus of cheap labor. Over ten million emigrants from Island Southeast Asia provide their labor abroad. How did this once-prosperous region transform into a “peripheral” one? This is the question I address in my new book, The Making of a Periphery: How Island Southeast Asia Became a Mass Exporter of Labor, published by Columbia University Press.

 

Periphery is a classical concept that figured prominently in the work of Nobel Prize winner Arthur W. Lewis and has become quite famous thanks to Immanuel Wallerstein’s world-system analysis. The work of Lewis and Wallerstein is of immense importance for understanding why parts of the world that were relatively prosperous in the past have sunk to the lower or even lowest echelons of economic performance today. This question has been taken up by Daron Acemoğlu and James Robinson in their bestseller Why Nations Fail. The strength of Wallerstein and of Acemoğlu and Robinson is that they explain global divergences from a historical perspective using a single theory. While history is indeed crucial for economic analysis, an unavoidable drawback of unifying theories is that they homogenize our understanding of complicated and diverse processes of long-term historical change. At the same time, it is impossible to do any serious global history and contribute to development economics without any theoretical and unifying perspective.

 

A way out of this dilemma is to start from the generally accepted position that plantation economies have a long-term negative effect on economic development. In this respect Island Southeast Asia resembles the Caribbean nations, where the legacies of the plantation economies consisted of meagre economic growth and massive unemployment. Today, massive emigration is the fate of the Caribbean region as much as it is of Island Southeast Asia. As Arthur W. Lewis has pointed out, the problem was not that plantations were sectors of low productivity, but that the unlimited supplies of labor in these regions suppressed wages. 

 

A central argument in my book is that Lewis’ thesis of the unlimited supplies of labor is still important for understanding how parts of the world have become a periphery. We know for the Caribbean where this labor came from: millions of Africans were kidnapped, enslaved, and transported across the Atlantic Ocean to produce sugar, tobacco and other crops for Europe and America. But where did the masses working at the plantations in the Philippines, Malaysia and Indonesia come from? For Malaysia it is clear that its plantations and mines imported Chinese and Indian labor on a massive scale. But for the Philippines and Indonesia it was natural demographic growth that guaranteed abundant labor supplies. One of the most fascinating stories my book deals with is the relatively successful smallpox vaccination in Java and the northern islands of the Philippines in the early years of the nineteenth century. The vaccine resulted in a precocious demographic growth of over 1.5 percent per annum. Together with a stagnant manufacturing sector and declining agricultural productivity, this created the abundant labor supplies for the developing plantation economies.

 

Still, this abundance of labor was not a sufficient cause for a region to be turned into a periphery. Coercion was another crucial factor. We know enslaved workers were coerced by the whip to grow commodities, yet they left the plantations en masse after emancipation, even though poverty was waiting. Coercion was also a necessary condition for the plantations in Southeast Asia. The plantation economies that emerged in parts of the Philippines and particularly in Java in the nineteenth century could not function without the collaboration of local elites and existing patron-client relationships. Local aristocrats and village elites supported the plantation economy in their role as labor recruiters and by forcing villagers to rent their land to plantations. They shared in the profits for each worker and each piece of land they managed to deliver.

 

In the Northern Philippines and Java, plantation economies were successfully embedded in existing agrarian systems. The Dutch introduced forced coffee cultivation in the early eighteenth century and a more comprehensive forced cultivation system on Java in 1830. Local elites played a crucial facilitating role in this transformation of existing agrarian and taxation systems for colonial export production. Java in particular suffered from economic stagnation and its population from malnutrition at the peak of the colonial plantation economy. Per capita income lagged behind other parts of the Indonesian archipelago, where independent peasants produced rubber, copra or coffee for the global markets. 

 

Once Indonesia and Malaysia had become free and independent nations, in 1949 and 1965 respectively, their governments branded plantations as colonial institutions and encouraged smallholder cultivation. They did so for a perfectly good reason: to ensure the revenues would benefit local development. Unfortunately, this decolonization was never completed. Palm oil, one of the world’s most important tropical commodities, has been a driving force in the establishment of new plantation regimes in Indonesia and Malaysia, which are the world’s first- and second-largest producers of this commodity. Over the past decades, we have seen the return of appalling coerced-labor conditions that were supposed to have been buried alongside colonialism. Palm oil plantations cause not only grave ecological damage, but also serious human rights violations.

 

The peripheral position of Southeast Asia in the world of today is the result of a long-term development, as many scholars from Immanuel Wallerstein to Daron Acemoğlu have pointed out. But high demographic growth and local systems of labor bondage are crucial elements in the making of a periphery. This book invites us to rethink the geography of colonialism, in which the Southeast Asian and Caribbean archipelagos share a history of massive coerced plantation work and present day mass emigration.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174253 https://historynewsnetwork.org/article/174253 0
The 1619 Project Debate with History of Slavery in New York City Author's note: “Represent NYC” is a weekly program produced by Manhattan Neighborhood Network (MNN). The show’s guests usually discuss topics like affordable housing, education policy and domestic violence. I was invited to discuss the New York Times’ 1619 Project and the long-term impact of slavery on New York and American society for a Black History Month broadcast. This post includes the questions I prepared to answer and notes for my responses. New York will soon need its own 1626 Project. 1626 was the year that the Dutch West India Company brought the first eleven enslaved Africans to the New Amsterdam colony that would later become New York City.

 

(1) Why has there been controversy over the New York Times 1619 Project?

 

There has been controversy over assertions made in the New York Times’ 1619 Project about the impact of slavery on the history of the United States. I am not knowledgeable about everything discussed in the project report, particularly African American cultural contributions, but I think the history was basically very sound and the project’s critics are off base. Three areas of contention are the role slavery played in fomenting the American Revolution, Abraham Lincoln’s attitude toward Black equality, and the white role in the struggle for African American Civil Rights. While there should always be legitimate disagreement about historical interpretation, I tend to side with Nikole Hannah-Jones and the 1619 Project.

 

American slaveholders lived in constant and hysterical fear of slave revolt and were always terrified that Great Britain would not provide adequate protection for their lives and property. In 1741, white New Yorkers invented a slave conspiracy and publicly executed more than two dozen enslaved Africans in Foley Square. Tacky’s Rebellion in Jamaica in 1760, which took months to suppress and led to the death of sixty white planters and their families, sent shock waves through British North America.

 

As President, Abraham Lincoln preserved the nation, but he never believed in Black equality. In 1861 he endorsed the Corwin Amendment, which would have prevented the federal government from interfering with slavery in the South. In his December 1862 State of the Union message, a month before issuing the Emancipation Proclamation, Lincoln offered the South a deal, which it rejected. He proposed that the slave system be kept until 1900, that slaveholders be compensated when Blacks were freed, and that freed Africans be voluntarily resettled in colonies in Africa, the Caribbean, and Central America. In his second inaugural address, just before he was assassinated, Abraham Lincoln offered amnesty to rebel states and slaveholders that would have left freedmen technically free but in a perpetual state of subservience.

 

Last, there were many whites prominent in the African American Civil Rights movements after 1955 and the Montgomery bus boycott, but very few before, except for leftist activists. Unfortunately, too many walked away from King and the Civil Rights movement after passage of the 1964 Civil Rights Act and the 1965 Voting Rights Act.

 

2) What are the top ways America benefited economically from the slave trade and from free labor? What about New York? 

 

Before 1500 the world was regionalized with little interaction between people and regions. What is now the United States was sparsely settled by hundreds of indigenous groups including the Leni Lenape, an Algonquin people who lived in what would become the New York metropolitan area. 

 

The Columbian Exchange launched the first wave of globalization and started the transformation of that world into the interconnected, globalized world we have today. At the core of what we call the Columbian Exchange was the trans-Atlantic slave trade and the sale of slave-produced commodities: sugar, tobacco, indigo, rice, and later cotton. Contemporary capitalism with all its institutional supports was the product of slavery. The slave trade led to the development of markets for exchange, regular shipping routes, limited liability corporations, and modern banking and insurance practices.

 

In colonial New Amsterdam and colonial New York enslaved Africans built the infrastructure of the settlement, the roads, the fortifications, the churches, the houses, and the docks. They cleared the fields and dredged the harbors.

 

In the 19th century, because it had a good harbor and was at the northern edge of the coastal Gulf Stream current, New York became the center for the financing, refining, and transport of slave-produced commodities around the world. Sugar from the Caribbean and cotton from the Deep South were placed on coastal vessels and shipped to the Port of New York, loaded onto ocean-going clipper ships, and then transported to Europe.

 

3) How did enslaved Africans change the landscape of New York City? 

 

If we look at images of Manhattan Island before the coming of Europeans and Africans to the area, it was very different from today. The Wall Street slave market, established by the City Common Council in 1711, stood on the corner of Pearl and Wall Street, where ships docked at the time. Now the waterfront is two blocks further east. As Africans built the village, they actually expanded the physical city with landfill. We know that Africans built the original Wall Street wall and the subsequent Chambers Street wall, which was the northern outpost of the city at the time of the American Revolution. A project by Trinity Church has established that enslaved African labor was used to build the original church and St. Paul’s Chapel, where George Washington prayed and which still stands just south of City Hall.

 

4) Once slavery was abolished in New York City, how were African Americans still oppressed financially and politically? 

 

Slavery ended in New York City and State gradually between 1799 and 1827. Essentially, Africans were required to pay for their freedom through unpaid labor during this transitional period. When finally freed, they received no compensation for their labor or the labor of their enslaved ancestors. Once free, even when they were able to acquire land, it was difficult for African Americans to prove ownership or protect their land from government seizure. One of the greatest injustices was the destruction of the largely African American Seneca Village in the 1850s, when the city confiscated residents’ land to build Central Park. If they had not been displaced, their land would be worth billions of dollars today. Politically, a series of discriminatory laws limited the ability of African American men to vote; of course, no women were allowed to vote.

 

5) Please describe how this financial oppression caused a lack of wealth for generations of African Americans. 

 

Most of the financial injustice and the wealth gap we see today are the product of ongoing racial discrimination with roots in slavery but enacted into federal law in the 1930s and 1940s. The New Deal established the principle that federal programs would be administered by localities, which meant that even when African Americans were entitled to government support and jobs, local authorities could deny it to them. After World War II, African American soldiers were entitled to GI Bill benefits but were denied housing and mortgages by local banks and realtors, creating all-white suburbs.

 

Originally, Social Security was not extended to agricultural and domestic workers, major occupations for Black workers in the 1930s. Social Security benefits are still denied to largely minority domestic and home health care workers who work off the books. Many jobs held by African Americans were not covered under New Deal labor legislation, and Blacks in the South were excluded from programs like the Civilian Conservation Corps.

 

6) What is redlining and how did it affect African Americans in New York? How did segregation affect financial disparities? How did Jim Crow? Please connect how these practices developed financial disparities between black and white Americans for generations. 

 

Banks and realtors reserved some areas for white homeowners and designated others for Blacks. On Long Island, Levittown had a clause in the sales agreement forbidding the resale or renting to Black families. Blacks were directed to declining areas like the town of Hempstead or areas prone to flooding like Lakeview. I grew up in a working-class tenement community in the southwest Bronx. My apartment building had 48 units and no Black families. This could not have been an accident. The only Black student in my class lived in public housing because his father was a veteran. There were a lot of Black veterans, but very few were sent to our neighborhood. Segregated neighborhoods also meant segregated schools. None of this was an accident.

 

Brooklyn Borough President Eric Adams was heavily criticized for remarks about white gentrifiers from out of state moving into formerly Black and Latino communities in Brooklyn and Harlem. The mock outrage misdirected attention away from what is actually taking place. The new gentrifiers are not becoming part of these communities. They are settlers displacing longtime residents. Partly as a result of a new wave of gentrification, homelessness in the city has reached its highest level since the Great Depression of the 1930s, including over a hundred thousand students who attend New York City schools.

 

7) How did a person's zip code affect their quality of life? How does it now?

 

Zip codes were introduced in 1963, so the world I grew up in was pre-Zip Code. 

I think the three biggest impacts of where you grow up are the quality of housing, the quality of education, and access to work. In neighborhoods with deteriorating housing, lead poisoning from paint and asthma exacerbated by rodent fecal matter and insect infestations are major problems. Children in these neighborhoods grow up with greater exposure to crime, violence, drug abuse and endemic poverty. Food tends to be lower quality because of the dearth of supermarkets and the abundance of fast-food joints. Schools tend to be lower functioning because teachers have to address all of the social problems in the communities, not just educational skills. Adults tend to have greater distances to travel to get to and from work, which essentially means hours of unpaid labor time.

 

8) How does access to education affect financial disparities between white and black communities? 

 

According to a Newsday analysis of Long Island school district funding, Long Island’s wealthiest school districts outspend the poorest districts by more than $6,000 per student. The Long Island school districts that spend the most per pupil include Port Jefferson in Suffolk County, where the student population is 87% white and Asian, and Locust Valley in Nassau County, where the student population is 80% white and Asian. On the other end of the spending spectrum, the districts that spend the least per pupil include Hempstead and Roosevelt in Nassau County, where the student populations are at least 98% Black and Latino. 

 

In New York City, parent associations in wealthier neighborhoods can raise $1,000 or more per student to subsidize education in their schools. At one elementary school in Cobble Hill, Brooklyn, they raise $1,800 per child. Parents in the poorest communities are too busy earning a living to fundraise and too economically stressed to make donations. These dollars pay for a range of educational supplements and enrichments, so that students who already have the most goodies at home also get the most goodies at school.

 

9) Why is it important to understand history when trying to understand the financial disparities between black and white communities in the U.S.? 

 

It is too easy in the United States to blame poverty on the poor, their “culture,” and their supposed “bad choices.” When drug abuse was perceived as an inner-city Black and Latino problem, it was criminalized and the solution was to build more prisons. Now that many white Midwestern and southern states have opioid epidemics, suddenly drug abuse is an illness that society must address.

 

10) What are some specific New York City examples of legislation or culture that impacted the financial divide between black and white families? 

 

In New York, urban renewal in the 1960s, that era’s name for gentrification, was also known as “Negro removal,” a term coined by James Baldwin. The 1949 federal Housing Act, the Taft-Ellender-Wagner Act (Robert Wagner was a New York senator), provided cities with federal loans so they could acquire and clear land in areas they deemed to be slums, which would then be developed by private companies. In the 1950s, the Manhattantown project on the Upper West Side condemned an African-American community so that developers could construct middle-class, meaning white, housing. Before it was destroyed, Manhattantown was home to a number of well-known Black musicians, writers, and artists including James Weldon Johnson, Arturo Schomburg, and Billie Holiday. Lincoln Center was part of the "Lincoln Square Renewal Project" headed by John D. Rockefeller III and Robert Moses. To build Lincoln Center, the largely African American San Juan Hill community was demolished. It was probably the most heavily populated African American community in Manhattan at that time.

 

 

11) What kind of New Yorker invested in the slave trade and how did that affect their family's wealth for generations? Can you give us a profile of this type of person. What kind of family benefitted? 

 

In the colonial era, prominent slaveholders included the Van Cortlandt family, the Morris family, the Livingston family, and the Schuyler family, of which Alexander Hamilton’s wife and father-in-law were members. Major slaveholding families also invested in the slave trade. Francis Lewis of Queens County, a signer of the Declaration of Independence, was a slave-trader.

 

12) What New York corporations benefited from slavery? Why does this matter today? 

 

Moses Taylor was a sugar merchant with offices on South Street at the East River seaport, a finance capitalist, an industrialist, and a banker. He was a member of the New York City Chamber of Commerce and a major stockholder, board member, or officer in firms that later merged with or developed into Citibank, Con Edison, Bethlehem Steel, and AT&T. Taylor earned a commission for brokering the sale of Cuban sugar in the port of New York and supervised the investment of the sugar planters’ profits in United States banks, gas companies, railroads, and real estate. The Pennsylvania Railroad and the Long Island Railroad were built with profits from slave-produced commodities. Because of his success in the sugar trade, Moses Taylor became a member of the board of the City Bank in 1837 and served as its president from 1855 until his death. When he died in 1882, he was one of the richest men in the United States.

 

 

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174246 https://historynewsnetwork.org/article/174246 0
The Cold War New and Old: Architectural Exchanges Beyond the West Recent reports on Chinese and Russian involvement in infrastructural developments in Africa and the Middle East have coined the phrase “New Cold War.” Yet when seen from the Global South, the continuities with the 20th century lie less in confrontation with the West and more in the revival of collaboration between the “Second” and “Third” worlds. In my recent book Architecture in Global Socialism (Princeton University Press, 2020), I argue that architecture is a privileged lens for studying the many instances of such collaboration and their institutional, technological, and personal continuities.  

 

When commentators discuss China’s expanding involvement in Africa, they often point to its continuity with the Cold War period. This includes the “Eight Principles” of Chinese aid, announced by Prime Minister Zhou Enlai in 1964 during his visit to Ghana, at the time of China’s widening ideological split with the Soviet Union. Less discussed is the fact that, in spite of this split, these principles closely echoed the tenets of the Soviet technical assistance from which China had benefitted and which was under way in Ghana. Like China later, the Soviet Union offered low-interest loans for the purchase of Soviet equipment, the transfer of technical knowledge to local personnel, and assurances of mutual benefit and respect for the sovereignty of the newly independent state. 

 

 

Above: Nikita Khrushchev and president Sukarno inspect the model of the National Stadium in Jakarta (Indonesia), 1960. R. I. Semergiev, K. P. Pchel'nikov, U. V. Raninskii, E. G. Shiriaevskaia, A. B. Saukke, N. N. Geidenreikh, I. Y. Yadrov, L. U. Gonchar, I. V. Kosnikova. Private archive of Igor Kashmadze. Courtesy of Mikhail Tsyganov.

 

 

In 1960s West Africa, the Soviet Union and its Eastern European satellites used technical assistance to promote socialist solidarity against the United States and Western Europe. Architects, planners, engineers, and construction companies from socialist countries were instrumental in implementing the socialist model of development in Ghana, Guinea, and Mali, a model based on industrialization, collectivization, the wide distribution of welfare, and mass mobilization. To this day, many urban landscapes in West Africa bear witness to how local authorities and professionals drew on Soviet prefabrication technology, Hungarian and Polish planning methods, Yugoslav and Bulgarian construction materials, Romanian and East German standard designs, and manual laborers from across Eastern Europe.

 

 

Above: International Trade Fair, Accra (Ghana), 1967. Ghana National Construction Corporation (GNCC), Vic Adegbite (chief architect),  Jacek Chyrosz, Stanisław Rymaszewski (project architects). Photo by Jacek Chyrosz. Private archive of Jacek Chyrosz, Warsaw (Poland)

 

 

Some of these engagements were interrupted by regime changes, as was the case in Ghana when the socialist leader Kwame Nkrumah was toppled in 1966. But socialist or Marxist-Leninist regimes were not the only, or even the main, destinations for Soviet and Eastern European architects and contractors. Their most sustained work, often straddling several decades, took place in countries that negotiated their position across the Cold War divisions, such as Syria under Hafez al-Assad, Iraq under Abd al-Karim Qasim and the regimes that followed, Houari Boumédiene’s Algeria, and Libya under Muammar Gaddafi. By the 1970s Eastern Europeans were invited to countries with elites openly hostile to socialism, such as Nigeria, Kuwait, and the United Arab Emirates.

 

Some countries in the Global South collaborated with Eastern Europe in order to obtain technology embargoed by the West. More often, they aimed to stimulate industrial development and to offset the hegemony of Western firms. Many of these transactions exploited the differences between the political economy of state socialism and the emerging global market in design and construction services. For example, state-socialist managers used the inconvertibility of Eastern European currencies to lower the costs of their services. In turn, barter agreements bypassed international financial markets: raw materials from Africa and Asia were exchanged for buildings and infrastructures constructed with components, technologies, and labor from socialist countries.

 

Above: State House Complex, Accra (Ghana), 1965. Ghana National Construction Corporation (GNCC), Vic Adegbite (chief architect), Witold Wojczyński, Jan Drużyński (project architects). Photo by Ł. Stanek, 2012.

 

 

The 1973 oil embargo was a game changer for Eastern European construction exports. The profits from oil sales, deposited by Arab governments with Western financial institutions, were lent to socialist countries intent on modernizing their economies. Yet when the industrial leap expected from these investments did not materialize, Hungary, Poland, and East Germany struggled to repay huge loans in foreign currencies. Debt repayment became a key reason to stimulate exports from many Eastern European countries, particularly as the Soviet Union was increasingly unwilling to subsidize them with cheap oil and gas. Faced with shrinking markets for their industrial products, Eastern Europeans boosted the export of design and construction services. 

 

By the 1970s, the main destinations of this export were the booming oil-producing countries in North Africa and the Middle East. In Algeria, Libya, and Iraq, state-socialist enterprises constructed housing neighborhoods, schools, hospitals, and cultural centers, as well as industrial facilities and infrastructure, paid in convertible currencies or bartered for crude oil. Eastern Europeans also delivered master plans of Algiers, Tripoli, and Baghdad, and worked in architectural offices, planning administration, and universities in the region. 

 

 

Above: Flagstaff House housing project, Accra (Ghana), 1963. Ghana National Construction Corporation (GNCC), Vic Adegbite (chief architect), Károly (Charles) Polónyi (design architect). Photo by Ł. Stanek, 2012.

 

 

These collaborations were as widespread as they were uneven. Under pressure from state and party leadership to produce revenue in convertible currencies, state-socialist enterprises were highly accommodating to the requests of their North African and Middle Eastern counterparts. The latter, in turn, were often constrained by the path-dependencies of Eastern European technologies they had already acquired. 

 

Many would see such transactions as indicative of a shift in socialist regimes from ideology to pragmatism, if not cynicism. But a more complex picture emerges when these transactions are addressed through the lens of architecture, which spans not only economy and technology, but also includes questions of representation, identity, and everyday life. 

 

While largely abandoning the discourse of socialist solidarity, Eastern European architects continued to speculate about the position in the world that they shared with Africans and Middle Easterners. In so doing, they aimed at identifying professional precedents useful for the tasks at hand. For example, the expertise of tackling rural underdevelopment, which had been long claimed by Central European architects as their professional obligation, provided specific planning tools in the agrarian countries of the Global South. In turn, the search for “national architecture” in the newly constituted states in Central Europe after World War I became useful when architects were commissioned to design spaces representing the independent countries in Africa and Asia. 

 

 

Above: Municipality and Town Planning Department, Abu Dhabi (UAE), 1979-85. Bulgarproject (Bulgaria), Dimitar Bogdanov. Photo by Ł. Stanek, 2015.

 

 

During the Cold War, these exchanges were carefully recorded in North America and Western Europe. But after 1989, they were often forgotten by policy makers, professionals, and scholars in the West. By contrast, buildings, infrastructures, and industrial facilities co-produced by West Africans, Middle Easterners, and Eastern Europeans are still in use in the Global South, and master plans and building regulations are still being applied. Some of the modernist buildings are recognized as monuments to decolonization and independence, while others sit awkwardly in rapidly urbanizing cities.  

 

Memories of collaboration with Eastern Europe are vivid among professionals, decision makers and, sometimes, users of these structures. They result in renewed engagements when, for example, Libyan authorities invite a Polish planning company to revisit its master plans for Libyan cities delivered during the socialist period. These memories speak more about the experience of collaboration that bypassed the West, and less about a Cold War confrontation, old or new.  

 

 

 

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174252 https://historynewsnetwork.org/article/174252 0
Frank Ramsey: A Genius By All Tests for Genius

 

It is hard to get our ordinary minds around the achievements of the great Cambridge mathematician, philosopher, and economist Frank Ramsey. He made indelible contributions to as many as seven disciplines: philosophy, economics, pure mathematics, mathematical logic, the foundations of mathematics, probability theory, and decision theory. My book, Frank Ramsey: A Sheer Excess of Powers, tells the story of this remarkable thinker. The subtitle is taken from the words of the Austrian economist Joseph Schumpeter, who described Ramsey as being like a young thoroughbred, frolicking with ideas and champing at the bit out of a sheer excess of powers. Or, as another economist, the Nobel laureate Paul Samuelson, put it: ‘Frank Ramsey was a genius by all tests for genius’.

 

Ramsey led an interesting life in interesting times. He began his Cambridge undergraduate degree just as the Great War was ending; he was part of the race to be psychoanalyzed in Vienna in the 1920s; he was a core member of the secret Cambridge discussion society, the Apostles, during one of its most vital periods; as well as a member of the Bloomsbury set of writers and artists and the Guild Socialist movement. He lived his life via Bloomsbury’s open moral codes and lived it successfully. 

 

The economist John Maynard Keynes identified Ramsey as a major talent when he was a mathematics student at Cambridge in the early 1920s. During his undergraduate days, Ramsey demolished Keynes’ theory of probability and C.H. Douglas’s social credit theory; made a valiant attempt at repairing Bertrand Russell’s Principia Mathematica; and translated Ludwig Wittgenstein’s Tractatus Logico-Philosophicus and wrote a critical notice of it that still stands as one of the most challenging commentaries on that difficult and influential book.

 

Keynes, in an impressive show of administrative skill and sleight of hand, made the 21-year-old Ramsey a fellow of King’s College at a time when only someone who had studied there could be a fellow. (Ramsey had done his degree at Trinity). 

 

Ramsey validated Keynes’ judgment. In 1926 he was the first to figure out how to define probability subjectively, and he invented the expected utility theory that underpins much of contemporary economics. Beginning with the idea that a belief involves a disposition to act, he devised a way of measuring belief by looking at action in betting contexts. But while Ramsey provided us with a logic of partial belief, he would have hated the direction in which it has been taken. Today the theory is often employed by those who want to understand decisions by studying mathematical models of conflict and cooperation between rational and self-interested choosers. Ramsey clearly believed it was a mistake to think that people are ideally rational and essentially selfish. He also would have loathed those who used his results to argue that the best economy is one generated by the decisions of individuals, with minimal government intrusion. He was a socialist who favored government intervention to help the disadvantaged in society. 
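
 

To make the betting idea concrete, here is a minimal sketch of how a betting quotient yields a degree of belief and how expected utility then guides choice. It is an illustration in modern notation rather than a formula quoted from Ramsey's 1926 paper or from the book; the symbols A, q, S, s, o, and u are placeholders introduced only for this example.

% Betting quotient: an agent who is indifferent about paying qS for a ticket that pays S
% if event A occurs (and nothing otherwise) reveals a degree of belief p(A) = q.
\[ p(A) \;=\; \frac{qS}{S} \;=\; q, \qquad 0 \le q \le 1. \]
% Expected utility of an act a over states s, with probabilities p(s) and outcomes o(a, s):
\[ \mathrm{EU}(a) \;=\; \sum_{s} p(s)\, u\bigl(o(a,s)\bigr), \qquad a^{*} \;=\; \arg\max_{a} \mathrm{EU}(a). \]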

 

In addition to his pioneering work on decisions made under uncertain conditions, Ramsey, with encouragement from Keynes, wrote two pathbreaking papers for the latter’s Economic Journal. The first, “A Contribution to the Theory of Taxation,” founded the sub-field of optimal taxation and laid the foundation for the field of macro-public finance, so much so that any research problem about optimal monetary or fiscal government policy is now called a Ramsey Problem. The second, “A Mathematical Theory of Saving,” founded the field of optimal savings by trying to determine how much a nation should save for future generations. This work on intergenerational justice has been expanded and improved upon by economic luminaries such as Kenneth Arrow, Partha Dasgupta, Tjalling Koopmans, and Robert Solow. As Ramsey suggested, the theory has been applied not only to income but also to exhaustible resources such as the environment, resulting in yet another new sub-discipline in economics called ‘optimal development’. 

 

Ironically, Ramsey told Keynes that it was a waste of time to write these papers, as he was preoccupied with much more difficult work in philosophy, and didn’t want to be distracted. 

 

His contributions in the latter field were enormous. Ramsey was responsible for one of the most important shifts in the history of philosophy. He had a profound influence on Ludwig Wittgenstein, persuading him to drop the quest for certainty, purity, and sparse metaphysical landscapes in the Tractatus and turn to ordinary language and human practices. Wittgenstein is one of the most influential philosophers in the history of the discipline and the shift from his early to his later position, caused by Ramsey, is one of the major signposts in the contemporary philosophical landscape.

 

More importantly, his own alternative philosophical views are still being mined for gems. Ramsey’s theory of truth and knowledge is the very best manifestation of the tradition of American pragmatism. His most illustrious contemporaries in philosophy—Bertrand Russell, G.E. Moore, the early Wittgenstein, and the members of the Vienna Circle—sought to logically analyze sentences so that the true ones would mirror a world independent of us. In contrast, Ramsey was influenced by C.S. Peirce, the founder of American pragmatism, who characterized truth in terms of its place in human life. When Ramsey died, he was in the middle of writing a book that is only now starting to be appreciated for its unified and powerful way of understanding how all sorts of beliefs are candidates for truth and falsity, including counterfactual conditionals and ethical beliefs. His general stance was to shift away from high metaphysics, unanswerable questions, and indefinable concepts, and move towards human questions that are in principle answerable. His approach, to use his own term, was ‘realistic’, rejecting mystical and metaphysical so-called solutions to humanity’s deepest problems in favor of down-to-earth naturalist solutions. 

 

Although Ramsey was employed by Cambridge as a mathematician, he published only eight pages of pure mathematics. But those eight pages yielded impressive results. He had been working on the decision problem in the foundations of mathematics that David Hilbert had set in 1928, which called for an algorithm to determine whether any particular formula is valid, that is, true on every structure satisfying the axioms of its theory. Ramsey solved a special case of the problem, pushed its general expression to the limit, and saw that limit very clearly. Shortly after his death, in one of the biggest moments in the history of the foundations of mathematics, Kurt Gödel, Alonzo Church, and Alan Turing demonstrated that the general decision problem was unsolvable. But a theorem that Ramsey had proven along the way, a profound mathematical truth now called Ramsey’s Theorem, showed that in large but apparently disordered systems, there must be some order. That fruitful branch of pure mathematics, the study of the conditions under which order occurs, is called Ramsey Theory. 
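
 

As a standard illustration of Ramsey's Theorem, here is the textbook "party problem," stated in modern notation. This is a common expository example rather than one drawn from Ramsey's own paper or from the book; R(r, s) and K_n are the usual symbols for Ramsey numbers and complete graphs.

% Finite Ramsey theorem, two-colour case: for all r and s there is a least number R(r, s)
% such that every red/blue colouring of the edges of the complete graph K_n, with n >= R(r, s),
% contains a red K_r or a blue K_s.
\[ n \ge R(r,s) \;\Longrightarrow\; \text{every 2-colouring of } E(K_n) \text{ contains a red } K_r \text{ or a blue } K_s. \]
% Smallest nontrivial case: R(3,3) = 6. Among any six people, some three are mutual
% acquaintances or some three are mutual strangers; five people are not always enough.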

 

His work in mathematics and philosophy is only the tip of the iceberg. A query sent out on Twitter, asking for innovations named after Ramsey, produced an astonishing nineteen items. Most of them are technical and would take an article of their own to explain. One, however, is easily accessible. In 1999, Donald Davidson, a leading philosopher of the twentieth century, coined the term ‘the Ramsey Effect’: the phenomenon of discovering that an exciting and apparently original philosophical discovery has already been presented, and presented more elegantly, by Frank Ramsey. 

 

Ramsey did all this, and more, in an alarmingly short lifespan. He died at the age of 26, probably from leptospirosis (a bacterial infection spread by animal feces) contracted by swimming in the River Cam.

 

His death made his friends and family (including his brother Michael, who later became Archbishop of Canterbury) question the meaning of life. Ramsey had something to say about that too. His poignant remarks in 1925 on the timeless problem of what it is to be human are just as important as his technical work:

 

My picture of the world is drawn in perspective, and not like a model to scale. The foreground is occupied by human beings and the stars are all as small as threepenny bits. … I apply my perspective not merely to space but also to time. In time the world will cool and everything will die; but that is a long time off still, and its present value at compound discount is almost nothing. Nor is the present less valuable because the future will be blank. Humanity, which fills the foreground of my picture, I find interesting and on the whole admirable. I find, just now at least, the world a pleasant and exciting place. You may find it depressing; I am sorry for you, and you despise me. But [the world] is not in itself good or bad; it is just that it thrills me but depresses you. On the other hand, I pity you with reason, because it is pleasanter to be thrilled than to be depressed, and not merely pleasanter but better for all one’s activities.

 

Unlike his friends Russell and Wittgenstein, who focused on the vastness and the unknowability of the world, Ramsey believed it was more important to concentrate on what is admirable and conducive to living a good life. Rather than focus on a ‘oneness’ or God, like his brother Michael, he thought the good life was to be found within our human, fallible ways of being.

 

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174250 https://historynewsnetwork.org/article/174250 0
How Religious History Helps Us Understand Russia's War Against Ukrainian Independence

 

In recent months, the ongoing multidimensional Russian-Ukrainian conflict, at once military, political, economic and cultural, has become an issue with which most people around the world are now familiar. What is less well known beyond Eastern Europe is the important religious aspect of Moscow’s “hybrid war” against Ukrainian national independence. According to its Constitution, Ukraine is, to be sure, a secular country where churches and all religious organizations are separated from the state. On the other hand, Ukraine is one of the most religious countries in Europe, and it has a Christian church whose history reaches back more than a millennium. 

 

The origins of Ukrainian Orthodoxy date back to the Middle Ages, when Prince Volodymyr the Great of Kyiv received Christianity from Constantinople in 988. The baptism of the Kyivan Rus was one of the crucial events in early Ukrainian history. It helped create a first proto-national community out of which today’s modern Ukrainian nation would later emerge. 

 

Eastern Christianity heavily influenced the rise of the first Eastern Slavic state, the Kyivan Rus, on the territories of today’s central and northern Ukraine, eastern Belarus, and western Russia. Later on, however, Orthodoxy in Eastern Europe was transformed from a medium of cultivation and unification into an instrument of domination and subversion. Increasingly reimagining itself as an empire, Tsarist Russia used the Orthodox church to justify and implement its control over Ukraine and Belarus. 

 

Against this background, the almost three decades since Ukraine’s independence have been marked by a fight of the Ukrainian Orthodox Church of the Kyiv Patriarchate (UOC KP) and the Ukrainian Autocephalous Orthodox Church (UAOC) against the dominance of the Ukrainian Orthodox Church of the Moscow Patriarchate (UOC MP) in Ukraine. This confrontation has political undercurrents, as the UOC MP is, in spite of its official name, de facto a branch of the Russian Orthodox Church (ROC), which, in turn, is a manifestly national (rather than pan-national) church that is unofficially but closely linked to the Kremlin. As such, the ROC and the UOC MP were, and are today, important soft-power instruments in the Kremlin’s hybrid warfare against Ukraine. They are a major medium for Moscow’s foreign policy and facilitate the Kremlin’s neo-imperial schemes under such headings as “Orthodox civilization,” “Russian World,” or “Eastern Slavic brotherhood.” 

 

In 2014, the so-called “Ukraine crisis” began. This is the common but misleading label for the war that broke out as a result of Russia’s illegal annexation of Crimea and covert intervention in the Donets Basin. Since then, the question of religious independence from Russia has become more pressing than ever for many Ukrainians. As a result of prolonged negotiations, in January 2019 the Ecumenical Patriarch of Constantinople Bartholomew I handed to a Ukrainian delegation in Istanbul a so-called Tomos (literally: little book), i.e. an official document that grants canonical independence to the newly established unified Orthodox Church of Ukraine (OCU). This was a major achievement not only for Ukraine’s religious autonomists. It was also a historic success of the presidency of Petro Poroshenko, whose team had, since 2016, done most of the diplomatic work in preparation for Constantinople’s momentous move. 

 

The Russian reaction to this historic act was predictably vitriolic and full of conspiracy theories. Even before Constantinople finalized its move in early 2019, Patriarch Kirill of the Russian Orthodox Church, among many others, condemned Ukraine’s forthcoming autocephaly in late 2018 with anger and hyperbole: “The concrete political goal was well-formulated by, among others, plenipotentiary representatives of the United States in Ukraine and by representatives of the Ukrainian government themselves: it is necessary to tear apart the last connection between our people [i.e. the Russians and Ukrainians], and this [last] connection is the spiritual one. We should make our own conclusions [concerning this issue] including on the tales which [the West], for a long time, tried to impose on us, during so many years, about the rule of law, human rights, religious freedom and all those things which, not long ago, were regarded as having fundamental value for the formation of the modern state and of human relations in modern society. Ukraine could become a precedent and example for how easily one can do away with any laws, with any orders [and] with any human rights, if the mighty of this world need it.”

 

The new Metropolitan of the Orthodox Church of Ukraine (OCU), Epiphanius, responded to these and many other Russian attacks by saying that “the Russian Orthodox Church is the last advance post of Vladimir Putin in Ukraine,” and that the “appearance of the OCU undercuts the imperial goals of the Kremlin leader. Putin is losing here in Ukraine the support which he had before because if he had not had this support, there would not have been a war in the Donbas. And therefore, we will consistently maintain ourselves as a single church – recognised and canonical in Ukraine. And gradually Russia will lose this influence through the souls of Orthodox Ukrainians here."

 

To be sure, the acquisition of canonical independence by the newly established Orthodox Church of Ukraine was not only a church matter and a source of division between Russia and Ukraine. It also played a role in Ukrainian domestic affairs, and, in particular, in the Ukrainian presidential elections of 2019. On the day of Epiphanius’s enthronement on February 2, 2019, then President Petro Poroshenko stressed that the OCU is and will be independent of the state. At the same time, he stated that “the church and the state will now be able to enter onto a path toward genuine partnership of the church and state for joint work for the good of the country and the people."

 

Representatives of the new OCU have repeatedly assured the public that the state does not meddle in religious affairs but merely contributed to the unification process. Yet former President Poroshenko actively presented Ukraine’s acquisition of autocephaly as his political victory vis-à-vis Russia during his 2019 election campaign, and he even went on a so-called Tomos tour through Ukraine. While such manifest political instrumentalization tainted the acquisition of Ukrainian autocephaly, the OCU’s independence is not a mere side-product of political maneuvering by Ukraine’s former president. It is the result of a decades-long struggle by many Ukrainian Christians against the dominance of the UOC MP and of the aspirations of many Orthodox believers in Ukraine.

 

According to the American theologian Shaun Casey, Ukraine’s Tomos, i.e. its obtainment of autocephaly for its Orthodox church, will lead to unification around the OCU and give the country a new opportunity to deal with religious diversity. Among others, Archimandrite Cyril Hovorun has emphasized that Ukraine’s acquisition of Constantinople’s Tomos corresponds to the general structure of the worldwide Orthodox community and to the national character of the individual Eastern Christian churches. Unlike the centralized and pyramidal structure of the Catholic Church, with the Pope at its top, Orthodoxy is divided into local churches and constitutes an international commonwealth rather than a unified organization.

 

The ongoing dominance of an Orthodox church subordinated to Moscow rather than Kyiv on the territory of independent Ukraine has thus always been an anomaly. It became an absurdity once Russia started a war against Ukraine in 2014. Therefore, Ukraine’s acquisition of autocephaly for its Orthodox church can be viewed as an opportunity to heal the schism between the various Eastern Christian communities on Ukrainian territory, and eventually to unite most Orthodox believers living in Ukraine. 

 

For that to happen, international recognition by other Orthodox churches is crucial, as it legitimizes the young OCU in the Eastern Christian world. So far, only the Patriarchate of Alexandria and the Standing Synod of the Church of Greece have officially recognized the canonical independence of the OCU. Even these were contested decisions. The former Greek defence minister Panos Kammenos called it a crime: “If anything happens in the next few months, the Holy Synod [of the Greek Orthodox Church] will hold all responsibility for the termination of guarantees granted by Russia, due to the recognition of the illegal Church of Ukraine.” 

 

In contrast to Greece’s Holy Synod, the Serbian Orthodox Church, a close ally of the ROC, has made it publicly clear that it will not recognize the OCU. It follows Moscow’s line in claiming that “the Kyiv-based Metropolia cannot be equated with current Ukraine as it has been under the jurisdiction of the Moscow Patriarchate since 1686." Events in Ukraine have gained additional meaning in the Western Balkans as Montenegro, NATO’s newest member, is currently discussing a contentious religious bill that enables the state to confiscate property of the Serbian Orthodox Church. The latter has, in response, blamed Kyiv for this development: “It appears that recent events in Ukraine, where the previous authorities and Constantinople Patriarchate legalized the schism, are currently repeated in Montenegro. Schismatics should confess and achieve reconciliation with the Serbian Orthodox Church." 

 

Reacting to recent developments in Ukraine, Greece and the former Yugoslavia, Moscow’s Patriarch Kirill now warns that “new work will now be done to strengthen Orthodoxy's canonical purity, and even greater efforts made to preserve and restore unity where this has been shaken." The OCU’s Metropolitan Epiphanius of Kyiv, in contrast, predicts that, in the near future, “at least three or four more churches will recognize our autocephaly.”   

Such radical statements reflect the fact that the OCU’s independence has the potential to change the balance of influence in the entire Orthodox world. Representatives of several other Christian and non-Christian religions have welcomed the emergence of a canonical and independent Ukrainian Orthodox church in 2019. The partly harsh rejection of the OCU by a number of Russian and pro-Russian Orthodox hierarchs has largely to do with the fact that the Moscow-dominated power relations in the international network of Eastern Christian churches are under threat. The emergence of a potentially large competitor in Eastern Europe could encourage certain other local churches currently under the Moscow Patriarchate to follow Ukraine’s example. 

 

Religion will remain an important factor in the ongoing conflict between Russia and Ukraine, and it will divide worldwide Orthodoxy as long as Moscow does not recognize Ukrainian autocephaly. The emergence of the OCU and its growing recognition among other local Orthodox churches will profoundly impact the post-Soviet space and other regions of the world. It will probably provoke Moscow into even harsher actions, as the Kremlin is gradually losing a vital instrument of its hybrid warfare against Ukraine. While autocephaly has been an aim of many Ukrainian Christians for centuries, Constantinople’s 2019 Tomos for the OCU is perceived, in and outside Ukraine, as a highly symbolic answer to the Russian military attack on Ukraine, an aggression of one largely Orthodox people against another. Against this geopolitical background, the OCU’s acquisition of autocephaly undercuts the crypto-imperial mood in the Moscow Patriarchate.

 

This article is an outcome of a project within the 2018-2019 Democracy Study Center training program of the German-Polish-Ukrainian Society and European Ukrainian Youth Policy Center, in Kyiv, supported by the Foreign Office of the Federal Republic of Germany. #CivilSocietyCooperation. Umland's work for this article benefited from support by "Accommodation of Regional Diversity in Ukraine (ARDU): A research project funded by the Research Council of Norway (NORRUSS Plus Programme)." See blogg.hioa.no/ardu/category/about-the-project/.

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174251 https://historynewsnetwork.org/article/174251 0
JFK's New Hampshire Primary Hope Resonates Today

 

The day before the 1960 New Hampshire presidential primary, candidate John F. Kennedy talked about America's great hope for disarmament. Speaking at the University of New Hampshire, JFK said "No hope is more basic to our aspirations as a nation" than disarmament. JFK exclaimed "there is no greater defense against total nuclear destruction than total nuclear disarmament."

It's vital that our presidential candidates today also share this goal of JFK's. Despite previous arms control treaties, there are still 14,000 nuclear weapons in the world, most held by the U.S. and Russia. The danger of a new, expensive arms race looms large.

JFK believed that the U.S., Russia and other nuclear powers have a common interest in disarmament. In his speech JFK noted "that funds devoted to weapons of destruction are not available for improving the living standards of their own people, or for helping the economies of the underdeveloped nations of the world." Nuclear spending fosters instability at home and abroad by stealing resources from the impoverished.

We need this same type of thinking as we negotiate progress toward nuclear disarmament. But sadly, treaties are being rolled back by the Trump administration, furthering the nuclear danger. We need to extend the New START treaty achieved by President Obama, which limits deployed strategic nuclear weapons for Russia and the U.S. We don't want to risk the possibility of having no arms control treaty with Russia in place. But Trump has been stalling on renewing the treaty, despite nearly everyone urging him to do so. Lt. Gen. Frank Klotz says "The most prudent course of action would be to extend New START before it expires in 2021 and thereby gain the time needed to carefully consider the options for a successor agreement or agreements and to negotiate a deal with the Russians.”

Extending New START also takes on extra meaning right now because of Trump's withdrawal from the INF Treaty with Russia, which has escalated nuclear dangers. That treaty, achieved by President Ronald Reagan, had eliminated short- and medium-range nuclear missiles.

Kennedy said in his speech that disarmament would take "hard work." We clearly have to work harder at diplomacy today, which was our main tool in controlling the nuclear threat during the Cold War. The Trump administration can start by ratifying the long overdue Comprehensive Nuclear Test Ban Treaty, which bans all nuclear test explosions.

President Dwight Eisenhower first pursued negotiations on a nuclear test ban with the Soviets during the Cold War. Kennedy continued Ike's efforts and achieved the great breakthrough of the Limited Nuclear Test Ban Treaty of 1963. This treaty banned nuclear tests in the atmosphere, underwater, and in outer space. It came just one year after the Cuban Missile Crisis brought the Soviets and the U.S. to the brink of nuclear war. But underground tests continued.

So we need to finish the job Ike and JFK started and finally ratify the Comprehensive Nuclear Test Ban Treaty. All Trump has to do is pick up the phone and ask the Senate to ratify it at last. We should encourage North Korea and China to ratify as a confidence-building measure toward nuclear disarmament in Asia.

We also need to convey the wastefulness of nuclear spending. The Congressional Budget Office warns that "The Administration’s current plans for U.S. nuclear forces would cost $494 billion over the 2019–2028 period—$94 billion more than CBO’s 2017 estimate for the 2017–2026 period, in part because modernization programs continue to ramp up." Daryl Kimball of the Arms Control Association says we need to extend New START and then build more arms reduction treaties to start cutting nuke costs.

Think of how tens of billions of dollars each year are going to be poured into nuclear weapons. Then think of all the different ways that money could be spent to better society. Those nuke dollars could feed the hungry, cure cancer and other diseases, and improve education and infrastructure. The World Food Program estimates that $5 billion a year could feed all the world's school children, a major step toward ending global hunger. We should fund that noble peace initiative instead of nukes.

The candidates for president must take up the cause of nuclear disarmament. 

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174244 https://historynewsnetwork.org/article/174244 0
The Unique Struggles of Women and Native Americans to Vote

 

Wealthy white men have had the right to vote in America since the beginning of our republic. It’s been a very, very different story for women and Native Americans. 

Women’s voting rights took a long time. Native Americans’ took longer. 

The struggle for women’s voting rights began in April 1776, when 32-year-old Abigail Adams sat at her writing table in her home in Braintree, Massachusetts, a small town a few hours’ ride south of Boston. 

The Revolutionary War had been going on for about a year. A small group of the colonists gathered in Philadelphia to edit Thomas Jefferson’s Declaration of Independence for the new nation they were certain was about to be born, and Abigail’s husband, John Adams, was among the men editing that document. 

Abigail had a specific concern. With pen in hand, she carefully considered her words. Assuring her husband of her love and concern for his well-being, she then shifted to the topic of the documents being drafted, asking John to be sure to “remember the Ladies, and be more generous and favourable to them than [were their] ancestors.”

Abigail knew that the men drafting the Declaration and other documents leading to a new republic would explicitly define and extol the rights of men (including the right to vote) but not of women. She and several other well-bred women were lobbying for the Constitution to refer instead to persons, people, humans, or “men and women.” 

Her words are well preserved, and her husband later became president of the United States, so her story is better known than those of most of her peers. 

By late April, Abigail had received a response from John, but it wasn’t what she was hoping it would be. “Depend on it,” the future president wrote to his wife, “[that we] know better than to repeal our Masculine systems.”

Furious, Abigail wrote back to her husband, saying, “If perticular [sic] care and attention is not paid to the Ladies, we are determined to foment a Rebellion.” 

Abigail’s efforts were unrewarded. 

Adams, Jefferson, Hamilton, and the other men of the assembly wrote, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the Pursuit of Happiness. That to secure these rights, Governments are instituted among Men, deriving their just Powers from the Consent of the Governed . . . ” 

The men had won. 

At that time, a married woman couldn’t make out a will because she couldn’t independently own property. Her husband owned anything she’d brought into the marriage. If he died, a man appointed by a court would decide which third of her husband’s estate she could have and how she could use it, and he would supervise her for the rest of her life or until she remarried. A woman couldn’t even sue in court, except using the same laws applied to children and the mentally disabled with a male executor in charge. 

And, for sure, a woman couldn’t vote. 

The Generational Fight for Women’s Suffrage 

Nearly a hundred years later, things hadn’t changed much. Susan B. Anthony went to her ward’s polling station in Rochester, New York, on November 1, 1872, and cast a vote. 

Justifying her vote on the grounds of the 14th Amendment, Anthony wrote, “All persons are citizens—and no state shall deny or abridge the citizen rights.”

Six days later, she was arrested for illegally voting. The judge, noting that she was female, refused to allow her to testify, dismissed the jury, and found her guilty. 

A year later, in the 1873 Bradwell v. State of Illinois decision, concerning the attempt of a woman named Myra Bradwell to practice law in Illinois, the US Supreme Court ruled that women were not entitled to the full protection of persons for purposes of voting or even to work outside the home. 

Justice Joseph P. Bradley wrote a concurring opinion, which minced no words: “The family institution is repugnant to the idea of a woman adopting a distinct and independent career from that of her husband. So firmly fixed was this sentiment in the founders of the common law that it became a maxim of that system of jurisprudence that a woman had no legal existence separate from her husband, who was regarded as her head and representative in the social state.”

After another 50 years, suffragettes eventually won the right to vote with the passage of the 19th Amendment in 1920. But burdensome laws, written and passed mostly by men, continue to oppress women to this day. These include voter suppression laws that hit women particularly hard in Republican-controlled states. 

Those states, specifically, are the places where “exact match” and similar ALEC-type laws have been passed forbidding people to vote if their voter registration, ID, or birth certificate is off by even a comma, period, or single letter. The impact, particularly on married women, has been clear and measurable. As the National Organization for Women (NOW) details in a report on how Republican voter suppression efforts harm women: 

Voter ID laws have a disproportionately negative effect on women. According to the Brennan Center for Justice, one third of all women have citizenship documents that do not identically match their current names primarily because of name changes at marriage. Roughly 90 percent of women who marry adopt their husband’s last name. That means that roughly 90 percent of married female voters have a different name on their ID than the one on their birth certificate. An estimated 34 percent of women could be turned away from the polls unless they have precisely the right documents.

MSNBC reported in a 2013 article titled “The War on Voting Is a War on Women,” “[W]omen are among those most affected by voter ID laws. In one survey, [only] 66 percent of women voters had an ID that reflected their current name, according to the Brennan Center. The other 34 percent of women would have to present both a birth certificate and proof of marriage, divorce, or name change in order to vote, a task that is particularly onerous for elderly women and costly for poor women who may have to pay to access these records.” The article added that women make up the majority of student, elderly, and minority voters, according to the US Census Bureau. In every category, the GOP wins when women can’t vote. 

Silencing and Suppressing Native Voices 

Republicans generally are no happier about Native Americans voting than they are about other racial minorities or women voting. Although Native Americans were granted US citizenship in 1924 by the Indian Citizenship Act, that law did not grant them the right to vote, and their ability to vote was zealously suppressed by most states, particularly those like North Dakota, where they made up a significant share of the nonwhite population. 

Congress extended the right to vote to Native Americans in 1965 with the Voting Rights Act, so states looked for other ways to suppress their vote or its impact. Gerrymandering was at the top of the list, rendering their vote irrelevant. But in the 2018 election, North Dakota took it a step further. 

Most people who live on the North Dakota reservations don’t have separate street addresses, as most tribes never adopted the custom of naming streets and numbering homes. Instead, people get their mail at the local post office, meaning that everybody pretty much has the same GPO address. Thus, over the loud objections of Democratic lawmakers, the Republicans who control that state’s legislature passed a law requiring every voter to have his or her own unique and specific address on his or her ID.

Lots of Native Americans had a driver’s license or even a passport, but very few had a unique street address. When the tribes protested to the US Supreme Court just weeks before the election, the conservatives on the Court sided with the state.

In South Dakota, on the Pine Ridge Reservation, the Republican-controlled state put polling places where, on average, a Native American would have to travel twice as far as a white resident of the state to vote. And because that state’s ID laws don’t accept tribal ID as sufficient to vote, even casting an absentee ballot is difficult. 

Although the National Voter Registration Act of 1993, also known as the Motor Voter Act, explicitly says that voting is a right of all US citizens, that part of that law has never been reviewed by the Supreme Court and thus is largely ignored by most GOP-controlled states. As a result, you must prove your innocence of attempted voting fraud instead of the state proving your guilt. 

Reprinted from The Hidden History of the War on Voting with the permission of Berrett-Koehler Publishers. Copyright © 2020 by Thom Hartmann. 

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174249 https://historynewsnetwork.org/article/174249 0
Manipulation of Spanish History Has Manufactured A Constitutional Crisis

 

Spain is facing an existential crisis. Catalonia, one of its seventeen constituent regions, has a government supported by a small majority that seeks to be fully independent from Spain. Because Spain had not been able to form a permanent government at the national level without support from the Catalan separatists in parliament, the Socialist prime minister had to promise to allow a referendum to determine if Catalonia would gain independence from the rest of Spain. Most Spaniards oppose such a referendum. The entire country will very likely fall into a state of civil war if the referendum is granted. The roots of this dilemma lie in the manufactured crisis brought on by warring nationalist groups with competing narratives of Spain’s past. 

 

Spanish nationalists, those descended from the Fascists who supported Spanish dictator Francisco Franco, always ignore the cultural diversity that characterizes Spain. Catalan nationalists, descended from a group of anti-Francoists, pretend that Catalonia is and has always been so radically distinct from the rest of Spain that it cannot possibly tolerate political union with the rest of the country. 

 

Both sides want to seize control by destroying Spain’s diversity. Spanish nationalists seek to undo the current political status quo, a unitary nation that has given significant power to its regions, called autonomous communities. Instead, they want to impose a single, unified Spanish (Castilian) identity upon all ethno-linguistic minorities in the country. Catalan nationalists also want to impose a single cultural and political identity within a completely independent nation where ethno-linguistic minorities such as non-white Latin Americans and Castilian-speaking Spaniards would be subordinated to Catalan cultural and political power. 

 

Both sides use history to make their case but in each case their reading of history is flawed. The truth is more complicated. Spain is an accident in that until 1640, it was not obvious that Portugal was any different from the rest of the Iberian Peninsula. Just like India, Bangladesh, and Pakistan used to be simply called India, Portugal used to be part of what was called Hispania, then the name of the entire Iberian Peninsula.

 

During the Roman era, Latin spread throughout most of the Iberian Peninsula except for the northern region, where it failed to take root due to strong Basque resistance. When the Roman Empire collapsed in the Latin West before 476, the Visigoths and Suebi established kingdoms in what was then called Hispania. The Suebi influenced Latin speakers in the western part of the Peninsula, while the Visigoths influenced the southern, central, and eastern portions of the Peninsula. The Basques managed to survive in the northeastern part of Hispania. Thus, before the Islamic conquest ended the earliest phase of the Middle Ages on the Iberian Peninsula in the years after 711, there were three major groups on the Peninsula: Hispano-Romans with more Visigothic than Suebic influence, Hispano-Romans with more Suebic than Visigothic influence, and non-Latinized Basques.

 

When Muslims from North Africa conquered the Peninsula, a few areas remained independent or quickly became independent from Muslim rule. Most of the free regions were partially or completely Basque, with one key exception. The one Latinate region freed from Muslim rule early on was Asturias. Today’s Spain largely derives from the political and cultural legacy of Asturias, which became a kingdom less than a generation after the invasion of the Iberian Peninsula.

 

Asturias developed out of the Hispano-Roman/Visigothic tradition. When it liberated Galicia from Muslim control in 739, it was joined by a major group from the Hispano-Roman/Suebic cultural sphere. The Galicians helped reconquer and colonize what became Portugal. As a result, Portugal and Galicia have more in common with each other than either does with the rest of the Peninsula. The differences between the Suebic-influenced cultures (Galicia and Portugal) and Visigothic-influenced cultures like Asturias were and are fairly minimal and certainly do not justify Portuguese independence.

 

Around 790, the Franks (a Germanic people who deeply influenced both France and Germany) intervened in the re-conquest (or Reconquista) of Hispania. They established the Marca Hispanica, which included the Christian areas of what is now Catalonia, increased Christian control in certain areas of the peninsula, and established the basic building blocks of the northeastern and north-central regions of medieval Hispania: the counties of Aragon and what became known as the Catalan counties. Catalan nationalists claim that the Frankish influence and the nature of the Catalan counties made Catalonia irreconcilably different from the rest of Hispania, but this is a late modern fudge that exaggerates small differences between Aragon and the Catalan counties. The Catalans are not merely Frankish or French, as some nationalists have implied, but are strongly linked to the Hispano-Romans. 

 

In sum, Frankish intervention, Basque resistance, and the continued Reconquista fought between Christian and Muslim states, with Latin speakers on both sides, led to a mishmash of states and cultures in Hispania. As the Christians of Hispania reclaimed the Peninsula, the Ibero-Romance languages came to dominate the whole peninsula except the Basque areas. By 1492, Islam was defeated in the Peninsula and most of the disparate cultures and territories were united under the personal union of two monarchs, Ferdinand and Isabella. This union eventually led to what we call Spain. Spain ruled over Portugal from 1580 to 1640. The Portuguese regained their independence by force.

 

Culturally and historically, then, the Iberian Peninsula is one unit. Only Portugal was never consolidated into Spain. Catalans are different from Castilians, just as Aragonese, Asturians, and Galicians differ from one another. 

 

The history of Spain does not support the Spanish nationalist claim that Castilian is the natural language and culture for all the peoples of the Peninsula. Castilian evolved in the lands between Asturias and the Muslim heartland and became dominant through luck, politics, and force of arms. However, Castilian was not the dominant language of much of Spain until the last three centuries. More importantly, Castilian was not the choice of the peoples on whom it was imposed. Today, it is impossible to divide Spain according to “irreconcilable cultural differences” without giving in to hate, xenophobia and racism. It is also impossible to rightly impose one culture, one language, one ethnicity upon all Spaniards. Only diversity and inclusion can save a united Spain, and the only form of government that can constitutionally support inclusion and diversity is federalism. The alternative is chaos. 

]]>
Sat, 29 Feb 2020 07:16:28 +0000 https://historynewsnetwork.org/article/174254 https://historynewsnetwork.org/article/174254 0