Showing posts with label technological singularity. Show all posts

Tuesday, March 12, 2019

AOC/socialist ignorance of how capitalism works. Also: automation, philosophy, etc.

Part of me says that it would be to kalon (for the sake of the noble/fine/beautiful) not to speak ill of anyone, an idea that Ben Franklin apparently implemented in his life.  Another part of me says that willful evil should be called out for what it is:

Congresswoman (D-NY) Alexandria Ocasio-Cortez (AOC) is more clever than wise, and not particularly clever at that; she has some facility with the one-liner aimed more toward the viscera than the mind.  How dare the USA, the richest country in the world, let people go homeless.  See?  Just whittle down a morally-socioeconomically-politically complicated issue into a twitter-length soundbite and pretend to have a superior moral compass, as though it's some morally-obtuse conglomeration of usually-rich people on the Other Side that has decided that people will go homeless.  But this is the vision of an infantile mind in search of heroes (the oppressed) and villains (the rich/powerful), not of someone of the intellectual seriousness naturally expected of a congressperson.

Think I'm exaggerating?  Have a look at these latest comments from AOC, at last directly and explicitly taking on the Great Satan of the leftist worldview, capitalism:

"Capitalism is an ideology of capital — the most important thing is the concentration of capital and to seek and maximize profit,” Ocasio-Cortez said. "And that comes at any cost to people and to the environment, she said, according to Bloomberg News, “so to me capitalism is irredeemable."
Although the self-described Democratic socialist stopped short of saying capitalism should be scrapped altogether, she explained that "we're reckoning with the consequences of putting profit above everything else in society. And what that means is people can't afford to live. For me, it's a question of priorities and right now I don't think our model is sustainable."
The congresswoman, who unseated a 10-term Democratic incumbent in a district spanning parts of Queens and the Bronx, said Democratic Socialism is more about the rights of workers than explicitly government-run industries. 
"It’s just as much a transformation about [sic] bringing democracy to the workplace so that we have a say and that we don’t check all of our rights at the door every time we cross the threshold into our workplace," she reportedly said. "Because at the end of the day, as workers and as people in society, we’re the ones creating wealth."
...
“We should be working the least amount we’ve ever worked, if we were actually paid based on how much wealth we were producing,” she said. “But we’re not. We’re paid on how little we’re desperate enough to accept. And then the rest is skimmed off and given to a billionaire.”

So let's start from the top.  "Capitalism is an ideology of capital...."  This is evidence that AOC never bothered to study the case for capitalism as made by its actual defenders, but rather read a lot of leftist literature that characterized (i.e., caricatured) capitalism without running it by those defenders for review/comment/correction; she has in mind some bogeyman version of "capitalism" that stresses profits over people.  That is actually very typical of the way socialist and anticapitalist literature has been written ever since the days of Marx, the standard-bearer of anticapitalist polemics.  The idea that capitalism is an ideology of capital and capital-accumulation could be taken directly from the pages of Marx.

If, on the other hand, one were to actually consult capitalism's defenders, one would find reference to the right of private property in the means of production (a misleading phrase if the mind/intellect isn't counted among those means, as Rand in particular counted it) being a necessary material consequent of individual rights, with Lockean property theory as a key historical predecessor to modern capitalist rights theory.  (Note that the typical leftist approach to Rand is to caricature her defense of capitalism and individualism, taking great care to ignore or misunderstand her fundamental identifications about the role of the mind/intellect in human existence.  If a leftist has ever addressed head-on the role-of-the-mind theme central to her thought, I'm not aware of it.)  The role of capital and capital accumulation is a consequence of the basic idea that an individual's productive activities should ultimately be a matter of that individual's own decision-making, and that includes the translation of that individual's self-initiated mental/intellectual activities into its material consequences.

Without that basic idea involved, one's picture of the nature, logic, and ultimately moral defense of capitalism is going to be distorted.  Certainly, if one fails to include in the category of labor the value-added from mental/intellectual activity, one will misconceive of who generates the value of the resulting material products, and who rightly benefits from that value-creation.  The category of capital, as economists define it, has to do with the human-created tools that improve the productivity of labor.  Land (or natural resources) is a pre-existing factor of production that by itself does not generate value-added.  Physical labor by itself has been a factor of production in human history (and prehistory) the value-added of which remained close to static until the Industrial Revolution, after which living standards skyrocketed.  (Despite all the outcry from socialists/leftists about working conditions in early industrial England, the population of England skyrocketed during that period; life expectancy also rose.  But capitalism isn't about what benefits people, right?)

One key factor of production mentioned in the "labor" and "capital" links in the paragraph above, that does specifically involve the role of the mind/intellect, is entrepreneurship, a factor that leftists seem to appreciate or understand the least.  But an entrepreneur does often have to secure financing from capitalists, and it's these capitalists who are treated as the primary villains parasitical on the wealth-creation processes they're financing.  Or, at least it's people acting in the capacity of capitalists ("qua capitalists") who are parasitical and non-value-added-generating.  There are, after all, entrepreneurs who are also capitalists, Bezos being an example (being CEO of Amazon as well as roughly 1/5 shareholder).  There are also financiers with unique financing talents who might be categorized as entrepreneurs of finance who, qua entrepreneurs, generate value-added over competing financial talent, but who qua capitalists, don't generate value-added (according to the leftist narrative).

Ultimately, it's this category of capital itself that is problematic under the leftist analysis; it increases the productivity of labor, and yet how does the category of labor not receive the full return on that productivity; how is it that this "surplus value" is "skimmed" off by owners of capital (qua owners of capital)?  The "pure" capitalist is one whose investments are of minimal risk (hence no risk premium that would "accrue" to investors in stocks), and basically reduce to the collection of interest (or rent).  And supposedly that's just not fair.  Even though in its pure interest-form it's the result of a person using savings from previous income rather than consuming it.  And in "the ideology of capitalism," an individual has every right to save some income and collect interest as a result, if someone is willing to pay interest.  Somewhere in that transition from being a consumer to being a pure capitalist there is something sinister going on, in the leftist mindset.  If, say, you lend a (saved) portion of your income to an entrepreneur, who labors to generate profit, some portion of that profit goes to the saver: "created wealth has been skimmed off by a capitalist!"
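
The saver-lends-to-entrepreneur scenario can be put in toy arithmetic (all figures made up for illustration; the point is only the structure of the exchange):

```python
# Toy arithmetic for the saver-lends-to-entrepreneur scenario above.
# All dollar amounts and rates are illustrative, not drawn from any real case.

principal = 1_000.00      # income the saver chose not to consume
rate = 0.05               # interest rate the entrepreneur agreed to pay
gross_profit = 200.00     # what the entrepreneur's labor generates in a year

interest = principal * rate              # return to the saver for deferring consumption
entrepreneur_keeps = gross_profit - interest

# The "skim" framing counts only the entrepreneur's labor as value-creating;
# but the saver's forgone consumption (plus the risk of non-repayment) is the
# service that the mutually agreed-upon interest pays for.
```

On these numbers the saver receives $50 and the entrepreneur keeps $150; whether that $50 is "skimmed" or earned is exactly the point at issue.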

Now, if you accept the narrative that something unjust is going on here in principle, you may well have a distorted understanding of the way the world works.  And that's even just using the "pure interest-gathering capitalist" example; the unique talents of entrepreneurs (Bezos), financiers (Buffett), etc., only begin to show that the rich often got that way by means of mental/intellectual talents that few possess, benefiting all sorts of others who don't possess that level of talent, and they didn't do so merely in the role of pure capitalist collecting interest on savings (which they could have consumed instead), savings that still increase the productivity of industry.

If, contrary to all economic logic and historical fact, capital (qua such) had increased the productivity of labor a ton while the category of pure labor (let's call it unskilled labor) remained compensated the same as before, we might then say that "labor created all that extra wealth but only capital got the rewards".  (Again, one is distinguishing here pure-Moneybags from the Hank Rearden whose intellectual talents are in effect a gift to his steel workers.)  Indeed, that's the picture/scenario that Marx of Capital fame would have us believe: the ever-increasing (or never-improving) immiseration and impoverishment of the laborer/proletarian.  Except that even in his lifetime it would be next to impossible for Marx not to have been confronted with evidence that this was not happening - namely, the rise in wages in England especially after 1850.  But whether Marx acknowledged such evidence is another matter.  If he didn't, he would be a standard-bearer for how socialists/leftists have acknowledged (i.e., failed to acknowledge) evidence ever since.

(A more intellectually honest concern in this regard would have had to do with whether the vastly-expanding population in England would lead to a Malthusian trap in which population increases would always push wages to subsistence levels even as the capital accumulated and increased the total GDP, generating ever-larger returns to capital.)
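
The Malthusian-trap worry can be made concrete with a toy simulation (a sketch under assumed, purely illustrative parameters: a diminishing-returns wage function and a population that grows whenever wages exceed subsistence):

```python
# Toy Malthusian-trap dynamics: capital accumulates steadily, but population
# growth responds to wages above subsistence, pushing wages back toward it.
# All parameter values are illustrative assumptions, not historical estimates.

SUBSISTENCE = 1.0   # wage at which population is stable (arbitrary units)
ALPHA = 0.5         # diminishing-returns exponent on capital per worker (assumed)

def simulate(periods=200, K=100.0, N=100.0, capital_growth=0.02, pop_response=0.5):
    """Return the wage path under Malthusian population dynamics."""
    wages = []
    for _ in range(periods):
        wage = (K / N) ** ALPHA      # wage rises with capital per worker
        wages.append(wage)
        K *= 1 + capital_growth      # capital accumulates every period
        # population grows when wages exceed subsistence, shrinks below it
        N *= 1 + pop_response * (wage - SUBSISTENCE) / SUBSISTENCE
    return wages

wages = simulate()
# Capital grows roughly 50-fold over the run, yet wages converge to just above
# SUBSISTENCE: population growth absorbs nearly all of the productivity gains.
```

That wages stay pinned near subsistence while total output (and the return to capital) keeps growing is precisely the trap; the historical escape from it after the Industrial Revolution is what the Malthusian worry failed to anticipate.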

Now, none of this should be any news to serious students of economics.  Any Marxist with minimal intellectual seriousness or integrity would have had to confront all these points at various times throughout the last century and a half.  It's not evident that AOC seriously considered any of this, but rather is spouting talking-points she would have gotten from being immersed mainly in leftist literature.  But what makes her case particularly egregious is that she has an econ degree, and yet she doesn't manifest awareness of what I just said is not news to serious students of economics.  In other words, she knows better, or ought to know better.

Now, there is one point where AOC gets into a more moralistic frame of mind above - although it's not unrelated to the strictly economic points - and I'll requote her:

"It’s just as much a transformation about [sic] bringing democracy to the workplace so that we have a say and that we don’t check all of our rights at the door every time we cross the threshold into our workplace," ...
One might refer to the moral issue raised here as the "what about the economic autonomy of those lower down the economic food chain" problem.  This is not one where I have an easy answer, given my professed devotion to the ideal of human flourishing or eudaimonia or self-actualizing (of which productive autonomy would be a part).  In saying that the answer here isn't an easy one, we're back to that original problem of AOC turning a morally-socioeconomically-politically complicated issue into an easy-sounding soundbite.  What does "bringing democracy to the workplace," for instance, require in practice?  Amazon (which AOC uninvited from her home city, so to speak) has an organizational structure where Bezos does work that no one else can do (else they would be doing it), and on what basis should he be taking orders from anyone else?  He does take orders -- from customers.  Leftists are really big on championing "the workers" and yet what do they think about customers?  Mises, to name a leading actual defender of capitalism, was really big on how the captains of industry must ultimately answer to customers.  Leftists seem to be lax about unionized industry that ill-serves customers.  (One thing about the public sector: does it have customers, exactly?  If not, does that distort incentives to serve the . . . not customers, exactly, but citizenry?)  Bezos does also have to answer to the shareholders, the financiers, but how do financiers make their money unless their financed enterprises serve the customers well enough?

(One leftist move is to replace "customer" with "consumer" and then bash "consumerism" as some byproduct of capitalism - which may well be the case if we treat the economic as explanatorily-primary substructure and culture as superstructure.  But I find that explanatory materialism plausible only as applied to primitive, prehistorical, or early-historical humanity, before the advent especially of philosophy.  If we were to take seriously the original Marxoid historical materialism, in which "the class which has the means of material production at its disposal, has control at the same time over the means of mental production," then we'd have to believe that the bourgeoisie controlled whatever came from Marx's own head.  Anyway, it's philosophy that would make people less consumerist and materialist (in the other sense, i.e., pursuing material possessions as a way of life).)

Attempts to implement alternatives to capitalism as mode of production all inevitably run into the very same set of problems that socialists cite as problems with capitalism.  (Which is to say in another way that the economic isn't explanatorily primary in the human condition.)  Let's say that we could even somehow "make the workplace democratic" in a serious sense of that phrase: that means that control of one's productive life is in the hands of a majority and not oneself.  How does that solve the worker-autonomy issue?  It can't be solved by placing such decisions in the hands of commissars and bureaucrats as under the state-run models.  Or, let's say that Amazon is "democratized" so that the shareholders and board of directors are replaced by "the workers" as primary decision-makers: where does Bezos then fit in?  Surely "the workers" want the best decision-making talent for the company they can hire, and surely they wouldn't want the unique talents of a Bezos to go to waste.  (Right?)  So how does Bezos not remain CEO, making his billions (in one form of compensation or another, be it via capitalist ownership of company value or an entrepreneurial salary) as before?  And once you start delegating things like that, rewarding for marginal value-added as a result of division of labor and specialization of talents and comparative advantage (hey, I have an econ degree, too...) and other related ideas (I did say that the moral issues involved are not unrelated to the economic ones, and if AOC's economic ideas are distorted here then her moral ones will be as well...), how does the socialist ideal not "devolve" ultimately back into capitalism?

What if "a worker" decides to save money and lend it at interest?  What if "a worker" decides to save up and start a business and hire whomever is willing to join the firm?  That doesn't sound "democratic" but it sure sounds like freedom and autonomy.

I think the (apparent or real) lack of autonomy that AOC sees in some fashion or other in capitalist enterprise has not just to do with the "hierarchical relation" involved, but with something about the human condition generally, or at least something about the condition of those with the least productive or most easily-replaceable skills (and therefore very limited bargaining position), and it's not clear or easy to see how a reform of the mode of production is going to solve that.  If someone does have very minimal skills, is it a good idea to make them part of the "democratic" decision-making process, or are such decisions better left to others?  (What would Aristotle say about that?)

What if the limited-seeming autonomy of "the (lower-skilled?) worker" involved can be remedied to a considerable extent through philosophical education (preferably starting at as young an age as feasible)?  What if part of such education is learning in the art of eudaimonia to the extent that a person's talents can make happen?  Presumably, at least at the margins, such an education would enable more people to become less reliant upon others for making, e.g., entrepreneurial decisions for them.  Certainly it's nothing new that upgrading skills is a way for people to improve their economic situation (including their sense of autonomy), and the art or science of eudaimonia would be the most comprehensive approach to upgrading skills in life.  But note that the art or science of eudaimonia is a specifically philosophical enterprise - it is about how to organize one's life best - and that is explanatorily prior to the economic.


BONUS/ADDENDUM (with more positivity/less polemical focus):

The explanatory primacy of philosophy in human life comes to the fore when AOC discusses the topic (in a stopped-clock-is-right-sometimes kind of way) of the consequences of automation, also quoted in the linked article:

“We should be excited about automation, because what it could potentially mean is more time educating ourselves, more time creating art, more time investing in and investigating the sciences, more time focused on invention, more time going to space, more time enjoying the world that we live in,” she said, according to The Verge. “Because not all creativity needs to be bonded by wage.”
It ought to make one wonder how AOC can think of capitalism as such a shitty and irredeemable system when it's bringing about greater amounts of automation.  (This is evidence she just doesn't think this stuff through.)  Okay, even if she wasn't going to credit automation to capitalism, it doesn't look like capitalism is getting in the way.  Anyway, philosophy has to be explanatorily basic here because only it could best guide us in what to do with all that leisure time that automation would create.  It also has to be explanatorily basic when it comes to what exactly humans are going to do with their creative intellects once AI is capable of doing the same things human intellects can do and then some.  Think about the problem (if it is one) of automation making even a Bezos's skills no longer so unique that he would be in the category of "skilled labor" any longer.  What do we do with all that free time, and what's left for us to do that AI can't?  Where do we go from there?  What autonomy could we really enjoy at that point, if delegating decision-making to more superior-skilled entities (agents?...) reduces one's own autonomy?  I mean, AOC/socialists' main complaint against capitalism is that it's "an ideology of capital" in which the human is subordinated to this inhuman machine, "capital," with its own logic of accumulation, etc.  What about what I'll dub "the ideology of intellect," which would make at least conceptual room for the human intellect to be subordinated to the intellect of AI machines?

The logical outcome of this would seem to be that we become "one" in some way with AI - in essence, to upgrade our human systems with AI tech.  Kurzweil notes that this is a point at which "humans transcend biology", although I'd like to raise the question: didn't humans begin "transcending biology" the moment they started using tools in any sophisticated way to improve their productivity beyond what their physical frames themselves could accomplish?  (Capitalism would only further facilitate advanced tool-usage.)  And so we've only been getting more and more sophisticated over time in transcending our biology?  Or is "biology-transcendence" more specific, i.e., we would no longer require a carbon-based substrate to sustain our (primarily intellectual?) lives?  (What happens when AI takes on the task of doing the most intellectually sophisticated of activities, such as philosophy?  "The highest responsibility of AI-philosophers is to serve as the guardians and integrators of AI-knowledge." -- AI-Rand?)  (And "we" are worried about how climate change might make our lives un-livable?)  And how would any of this eliminate the need for individual rights, including property rights in some form?  Are we to assume that "post-scarcity" post-humans would abandon the concepts of "mine and thine," the ultimate socialist ideal realized at last?  Somehow I'm skeptical.  But - just as it is today - the question of "mine and thine" in an ownership-rights sense would, in this AI-directed future, take a backseat to more philosophical-hierarchically basic matters such as life's meaning.  Or would hierarchical primacy take on a different form by then?  But given the place of philosophy in the hierarchy of knowledge, how would it?

Wednesday, February 13, 2019

The earth going forward

In a nutshell, the earth going forward will be affected by what human beings do.  This is why the era we are entering is now dubbed the Anthropocene.  There are two major trends going on right now: (1) technological maturation and (2) stress on the ecological system.  (When I think of ecological stresses it's not just climate change that comes to mind; I also think of the acidification of the oceans, declining insect populations and biodiversity, destruction of the coral reefs, the Great Pacific Garbage Patch, antibiotic-resistant diseases, and other readily googlable troubling phenomena.)

(Also, any educated person these days should be considerably familiar with ourworldindata.org.)

In the light of exponential growth in technology, which is now seeing AI or machine learning going mainstream, advances in robotics, nanotechnology on the near horizon, lab-grown meat becoming affordable around this year (which can only put some sized dent in the consumption of farm-grown meat, even organically farmed meat with its more resource-intensive and therefore more expensive processes, along with the methane produced by such processes), production automation making goods and services ever more affordable (counteracting to a great extent supposed disemployment effects), and any number of other advances, it becomes very difficult to envision the future of humanity in much detail beyond a few years from now.  The most significant of the advances would probably be in the area of AI, for the same reason that intelligence-capable human beings mark a rather radical departure from nature's and life's original courses.  And you have to imagine AI helping humans solve problems in conjunction with their use of all the other new emerging technologies.

Climate change and other actual or potential ecological crises would definitely be a major problem going forward, if present human trends using present technology continue.  But the latter is not going to happen.  Do we really have any way of telling what the earth is going to be like in half a century?  By then will biodiversity be engineered by humans, the coral reefs restored, agriculture moved to laboratories, etc.?  How about any advances in human culture, e.g., philosophy (and therefore superior rationality, and ultimately Aristotelian-caliber rationality or intellectual perfectionism) for children becoming mainstream?  Will AI help humanity transcend its addictions to rationality-undermining facets of social media, which people are already growing sick of and looking for solutions to?

This seems to be a good time for bets to be placed as to whether this or that ecological challenge will be met by technological advances, and when.  If people have too little information to go on to make such bets, then that just reinforces my point here: we really don't know how the earth is going to look going all that much forward.  And maybe that's the source of present-day anxieties.  (We may be living dangerously, with all the psychological consequences of that.)

We might try to go 50 years into the past for some guide to what we might expect to transpire over the next 50 years.  51 years ago, Kubrick's 2001: A Space Odyssey was released.  (It was also one year before man first landed on the moon.)  There was inevitably some amount of speculation on Kubrick's (and author of the book version, Arthur C. Clarke's) part, such as the form that advanced AI might take, with the eventually villainous HAL 9000 ("I'm sorry, Dave...").  But there was only so much that could be done even at the level of speculation, which the film's "mysterious" ending is meant to convey.  As Kubrick explained in interviews at the time, the Star Gate sequence and the resulting Star Child are meant as symbolic and/or allegorical depictions of humanity taking a "leap" to a higher level of being.  (The musical cue from "Also Sprach Zarathustra," Richard Strauss's musical tribute to Nietzsche's book, appears in the film where the ape advances into man, and then when the man advances into the Star Child, which Kubrick directly refers to in interviews as a kind of superman.)  But the symbolic or allegorical treatment is a replacement for literal depictions of futuristic humanity or contact with alien species (represented indirectly by the black monolith), because at that point we just wouldn't know.

This reveals a problem with a lot of non-Kubrick science fiction.  Take even such lauded sci-fi as Blade Runner, which is set in the Los Angeles of 2019.  By that time, there would be humanoid replicants who almost thoroughly successfully mimic human beings.  Somehow, humanity would have gotten to the point of creating such replicants without first thinking through the implications.  But it's precisely such cultural resources as Blade Runner that get humanity to first think such things through.  It's why the year 1984 came to pass without the world becoming like Orwell's novel.  As China begins implementing its "social credits" system here very soon, it invites warnings and comparisons to Big Brother.  (It's hard to tell whether the concerns here are overblown.)

Another common element in a lot of sci-fi, save perhaps for Star Trek: the futures depicted are often dystopian -- i.e., humanity misused its technology, with the result often being that a tyrannical government or corporate entity used that technology to control or dehumanize people, use them for gory entertainment purposes, consume them, limit their lifespans, manipulate their minds, and so on.  Even with Star Trek and Star Wars, we see wars occurring, but what would motivate beings who are that technologically advanced (and, presumably, intellectually advanced as they use their technology to learn how to become more morally and aesthetically perfect?) to go to war?  The movie Independence Day (1996) depicts a hostile alien race - which has mastered interstellar travel - coming to earth to use its resources.  Perhaps going forward, humans will increasingly demand that movies with such dubious and intelligence-insulting premises not be made?  That alone would be a cultural improvement, and less wasteful of storytelling resources.  And becoming smarter and more efficient with resources is just part of humanity's technological improvement.

The same year as 2001's release, Paul R. Ehrlich foresaw doom with his book, The Population Bomb.  In 1980 he made a wager with economist Julian Simon, "betting on a mutually agreed-upon measure of resource scarcity over the decade leading up to 1990. ... Ehrlich lost the bet, as all five commodities that were bet on declined in price from 1980 through 1990, the wager period."  This strikes me as an instructive example of doom and gloom coming up against what Simon referred to as the ultimate resource: “skilled, spirited and hopeful people who will exert their will and imaginations for their own benefit, and so, inevitably, for the benefit of us all.”  In short, the human mind.
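
The mechanics of such a wager can be sketched in a few lines.  The metal prices below are made up for illustration (only the structure follows the account above: the real bet reportedly staked $200 on each of five metals, settled on inflation-adjusted value; the CPI figures are approximate annual averages):

```python
# Sketch of how a Simon-Ehrlich-style wager settles: the loser pays the
# inflation-adjusted change in value of a fixed commodity basket.
# The 1990 nominal prices below are hypothetical, NOT the actual outcomes.

def real_value(nominal, cpi_then, cpi_now):
    """Deflate a nominal dollar amount into base-period dollars."""
    return nominal * cpi_then / cpi_now

def settle_wager(basket_start, basket_end, cpi_start, cpi_end):
    """Return (payer, amount): Ehrlich pays if the basket's real value fell."""
    start = sum(basket_start.values())
    end_real = real_value(sum(basket_end.values()), cpi_start, cpi_end)
    if end_real < start:             # real prices fell: scarcity did not bite
        return "Ehrlich", start - end_real
    return "Simon", end_real - start

# $200 of each metal in 1980; 1990 nominal values are made-up placeholders
basket_1980 = {m: 200.0 for m in ("chromium", "copper", "nickel", "tin", "tungsten")}
basket_1990 = {"chromium": 240.0, "copper": 190.0, "nickel": 210.0,
               "tin": 150.0, "tungsten": 160.0}

# Approximate annual-average CPI-U: 82.4 (1980), 130.7 (1990)
payer, amount = settle_wager(basket_1980, basket_1990, cpi_start=82.4, cpi_end=130.7)
```

Note the deflation step: even a basket whose nominal value held steady would count as a fall in real terms, which is why the wager measured scarcity rather than mere inflation.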

Which is to say, that one's level of anxiety over the future of planet earth is probably inversely proportional to one's confidence in the ability of humans to use their mental capacities to solve problems.

I can't say that I'm all that anxious about the condition of the earth going forward.

(My anxiety, if that's what it is, is more about how even intellectually- and culturally-advanced humans would manage to discover lasting meaning if/when they have all that extra time on their hands in a 'post-scarcity' era; I just hope beauty would always remain fulfilling, seeing as how 'living to kalon' - for the sake of the beautiful or noble or fine, where our values or needs are in harmonious proportion in a hierarchy (and wherein we discover our unique form of self-actualization or eudaimonia) - is ultimately the best theoretical accounting for our widely-shared commonsense standard of value that I can think of.  Perhaps that means humans eventually becoming essentially aesthetic-creative beings.  Is that what Nietzsche had in mind with the 'overman' idea?...)

Friday, April 19, 2013

Is it "later than we think"?

For those of you reading this in the year 2100 (I think the human race will make it till then), a bit of perspective: this past week's American news was focused almost entirely on the "Boston Marathon Bombing" and the aftermath that left four dead and well over one hundred wounded.  Within 4 days of this terror attack, one of the two suspects (young Muslim males, as it happens) was dead and the other in custody.  This was the story for 4 days, seemingly 24/7 on the cable news channels.

I figure if that was the main story of the week, it was - all things considered - a slow news week.  Bombings of this sort still happen quite frequently around the world in the year 2013, but this was one that hit home, hence the wall-to-wall news coverage.  Boston went into lockdown mode for most of today, but in all seriousness, if this is the main - seemingly exclusive - focus of news coverage for an entire week, just how bad a state is the world really in, in the year 2013?  I have in mind here Steven Pinker's recent work on the decline in violence (percentage-wise) over the course of human history.  Despite the troubles and challenges we all still face at this point in history, we should certainly step back and take the long view of these things.

Some 2,500ish years ago, the human race - in existence pretty much in present form for some hundred thousand years - entered what may well be termed an adolescent phase, a phase of questioning and examining pre-existing beliefs, with philosophers leading the way.  Back then, it is true, a philosopher could be sentenced to death by hemlock, but that wouldn't happen today (not in the West, anyhow).  At most he'd be assessed a fine.  Fast forward some 2,300ish years, and modern republican democracy is established in America, and that ethos spreads to much of the rest of the world over that time.  Slavery is no longer considered acceptable, women enjoy equal social status with men.  (Again, in the West.)  The agricultural revolution of thousands of years ago, along with human intellectual progress since that time, paved the way for the industrial revolution of the modern republican-democratic era.  A system that came to be termed 'capitalism' emerged and, after failed experiments in socialist models of production, it now looks to be here to stay for the foreseeable future, with modifications here and there.  Now, it appears that some new revolution, bringing the human race to the next stage of advancement, is in its infancy.  Within a couple centuries, the population boomed to over 7 billion, and in recent decades the global rate of poverty has been falling more and more towards zero.  Nuclear technology, almost the moment it was developed, was used to end a world war some 7 decades ago, and hasn't been employed in wartime since.  Back during those times, a bomb killing three and injuring scores of others was merely a small subset of a single day's bloody events.

If one were to look at the dystopian science fiction that emerged in the postwar era and lasted until roughly the internet age, one would get the impression that by 2013 the world might plausibly be engaged in more world-warring, or nuking one another (how about the future dystopia depicted in the Terminator film of 1984, produced during a period of intense nuclear buildup between the U.S. and the Soviet Union?).  We don't have the flying cars yet, but neither has a world resembling Orwell's 1984 even remotely been realized, despite concerns in recent years about a military-industrial "surveillance state" (concerns that, voiced as they have been, have kept such activities of the state in check).  Note that big cities such as Boston now have security cameras that can be used to survey public spaces, which were instrumental to tracking down the two bombing suspects in a remarkably short amount of time; the cultural norm of today is that privacy is naturally expected in one's own home, but there's no expectation of privacy in public spaces.  So we have had advances in technology in combination with evolved legal norms that, other things being equal, have made undetected criminal behavior that much more difficult to carry out.

As has been widely noted, including here on this blog, the democratization of the world means less warring between states.  Dystopian totalitarian scenarios appear to be a thing of the past, arguably in no small part due to the very warnings from observant and conscientious authors such as Orwell (and Rand!), and other public intellectuals.

According to the cheesy dystopian '70s and '80s sci-fi (ever see Logan's Run?  Jenny Agutter was hot, at least), the average human being in the year 2013 might turn on the television and be witness to the surreal - say, like, an inhuman "game show" such as The Running Man.  Well, it turns out that humans these days aren't nearly so eager to see their fellow humans being hunted down in such a fashion.

Yes, a truly bad candidate appeared on the Republican presidential ticket 5 years ago, a sign the country might have been going insane.  But the candidate ended up discredited due to diligent commentary in the blogosphere and other media.  Sure, there's an obesity epidemic in America, but fat-shaming has become a thing as a consequence.  At least the problem isn't the other way, as in a world running out of food.  Yes, global warming appears to be the biggest problem facing humankind in the coming decades, but . . .

Getting back to that thing about what we might see turning on our television sets in the year 2013.  How many have noticed just how beautiful Hi-Def television is?  I'm talking especially in terms of form of presentation; the content can certainly be improved.  But there's got to be some kind of theorizing among those in the field of aesthetics about the nature of Hi-Def television, else they will have failed at doing what they're supposed to be doing.  And let's keep in mind that Hi-Def television was not at all envisioned back in the 1980s, certainly not in the cheesy sci-fi movies.  If it had been envisioned back then, there would be a huge fortune to be made by the envisioner(s).  Or the smartphones and digital pads.  Do human beings these days realize, all things considered, just how good people have it these days?  And let's not forget about the way the internet has exploded and evolved as a medium of information and communication, and can only continue to do so.  Now this thing called 3-D printing appears to be hitting scalability.

Given the course of human history over the past few decades, we may well be in near-Singularity mode (the technological singularity, at least) as it is, because we don't seem to have any really clear idea how the world will look 10 years from now.  If we could, then - again - some huge fortunes could be made based upon some good predictions.  Kurzweil defines the technological singularity as the point when super-intelligent machines are created, which is supposedly some decades down the line.  Supposedly, in principle, they can be created, despite the present barriers we face with regard to reverse-engineering the human brain.  (Biological theories of consciousness seem to be what the philosophers are converging upon.  I think they might have figured that out a lot sooner had they paid more attention to Aristotle . . . but what the F do I know.)  And I don't see what else we could converge toward culture-wise than the whole Aristotelian-Jeffersonian-Randian-perfectivist paradigm.  Kurzweil has made his case in the technological realm; I believe I've amply demonstrated mine (here in this blog) for culture, at least in broad outline.

So where do we go from here?  Whatever it is, it ought to be really effing interesting.

So it looks like tomorrow, 4/20, at 4:20 p.m. (EST?) I go "on strike," which may very well contribute to the interesting-ness of whatever is to come.  I hardly have the faintest idea as to the what, when, where, how, etc.  We're just gonna have to find out, aren't we. ;-)

Sunday, September 16, 2012

A Deep Blue of Philosophy?

Just posted this to a reddit r/Objectivism thread:


I often find that when I come to some point of Objectivism (or anything else) that I critique, I employ the methods of Objectivist thinking and I can't think of any way around those methods, which emphasize the concepts of integration, context, and hierarchy. How does one attack those concepts without self-contradiction? Anyway, if I come upon something in Rand's writings that I think falls short, I critique it on the grounds that it isn't a correct application of her own prescribed methods. This is why I'm more comfortable calling myself a perfectivist than an Objectivist; it commits me to no doctrine or practice other than the relentless accumulation and integration of knowledge, like with Aristotle. There are chess grandmasters, and then there's Kasparov, and Aristotle is philosophy's Kasparov. (Which raises a question: could a philosophical "Deep Blue" be developed? Damn...has that question been asked before?)


Just to clear up a thing or two right away: Deep Blue isn't a conscious entity.  As it is, there are discussions within philosophy-of-mind circles about whether and how we can determine that a machine with the behavioral characteristics of HAL in 2001: A Space Odyssey is conscious.  HAL does display many if not all requisite characteristics of intelligence.  Deep Blue is far from a HAL, but much like HAL, it is quite expert at performing the task of playing chess - a rather limited task.

What led me quite quickly to think of HAL is what HAL is short for: Heuristically programmed ALgorithmic computer.  I'm not an expert on what a heuristic algorithm involves, but I gather Deep Blue relies on such a principle.  (The term "heuristics" appears once in the Deep Blue wikipedia article, so I'm probably onto something.)
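For what it's worth, the basic idea behind chess engines of Deep Blue's sort can be sketched in a few lines: search the game tree some moves ahead, but instead of playing every line out to checkmate, score the leaf positions with a heuristic evaluation function, and prune branches the opponent would never permit (alpha-beta pruning).  This is only a toy illustration over a hand-built tree - not Deep Blue's actual code, and the tree and scores here are made up for the example:

```python
# Toy sketch of heuristic game-tree search: minimax with alpha-beta
# pruning, scoring leaves with a heuristic instead of searching to
# the end of the game.

def alphabeta(node, depth, alpha, beta, maximizing, evaluate, children):
    """Return the heuristic minimax value of `node`."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)  # a heuristic guess, not a proven outcome
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, evaluate, children))
            alpha = max(alpha, value)
            if alpha >= beta:  # prune: the opponent won't allow this line
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, evaluate, children))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# A tiny hand-built game tree; leaves carry heuristic scores.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 evaluate=lambda n: scores.get(n, 0),
                 children=lambda n: tree.get(n, []))
print(best)  # branch "a" guarantees min(3,5)=3; branch "b" only 2 -> 3
```

Deep Blue's real strength was doing this over hundreds of millions of positions per second with a chess-specific evaluation function; the skeleton, though, is just this.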

Something else to clear up: Where "the highest responsibility of human philosophers is to serve as the guardians and integrators of human knowledge" (Rand, ITOE), a Deep Blue machine wouldn't technically qualify as a philosophical machine, because knowledge requires a consciousness.  (I'm pretty sure of that necessary connection but I'll think it through some more.)  What a Deep Blue machine would be doing, in the task of integration, is integrating content without being aware of that content.  (I'm speaking here of the Deep Blue machine as it is now, not an extra-advanced one like HAL.)

But here's the interesting part: Say that scientists could program a machine of Deep Blue's computing power to crawl the web (Google, wikipedia, etc.), integrate its contents, and generate output for humans to work with.  Would that (not) be pretty awesome?  Would the task involve much greater complexity than that involved in playing a chess game while seeing 18 moves ahead?  Could such an algorithm be developed to home in on what is essential content, and to home in on connections between items of content, such as what terms in a wikipedia article are hyperlinked?  As has already been discovered, wikipedia has a hierarchical organization demonstrated through a certain pattern of hyperlinking practices: repeatedly following the first link of an entry leads, for approximately 95% of wikipedia entries, to the Philosophy entry.  (This would come as a surprise to a lot of folks, but not the least bit of a surprise to Miss Rand, who, aside from penning endlessly-caricatured novels, actually wrote things on the nature and role of philosophy in the human endeavor, and topics connected with that.  If this kind of stuff had already been spelled out in philosophy textbooks, I might have noticed.  Seeing as so few people acknowledge the fundamental role of philosophy in human life, I doubt this message, even if contained in textbooks, got through to the readers as it fucking well should have.)  Wikipedia is quite the example of a system of content, enabled by the development of the internet, that, qua mapping of territory, condenses or essentializes a vast array of territorial concretes.  (I think the term "encyclopedic knowledge" involves the same phenomenon, i.e., systematic essentialization, not necessarily an expertise or familiarity with the mind-boggling number of concretes that an essentialized system necessarily contains.  Encyclopedic knowledge isn't so concrete-bound.)
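The first-link-to-Philosophy experiment itself is easy to sketch.  A real run would fetch live wikipedia pages and parse out the first non-parenthesized link; the version below substitutes a small hand-made map of article-to-first-link, so the chains are illustrative stand-ins, not the actual link structure:

```python
# Sketch of the "first link leads to Philosophy" experiment, using a
# hand-made article -> first-hyperlink map in place of a live crawl.
# The entries below are illustrative assumptions, not real wiki data.

FIRST_LINK = {
    "Deep Blue": "Chess",
    "Chess": "Game",
    "Game": "Activity",
    "Activity": "Philosophy",
    "HAL 9000": "Fiction",
    "Fiction": "Narrative",
    "Narrative": "Fiction",  # a loop: this chain never arrives
}

def chain_to_philosophy(start, first_link, limit=100):
    """Follow first links from `start`; return the chain if it reaches
    'Philosophy', or None if it loops or dead-ends first."""
    chain, seen = [start], {start}
    node = start
    while node != "Philosophy" and len(chain) < limit:
        node = first_link.get(node)
        if node is None or node in seen:  # dead end or cycle detected
            return None
        seen.add(node)
        chain.append(node)
    return chain if node == "Philosophy" else None

print(chain_to_philosophy("Deep Blue", FIRST_LINK))
# -> ['Deep Blue', 'Chess', 'Game', 'Activity', 'Philosophy']
print(chain_to_philosophy("HAL 9000", FIRST_LINK))  # -> None (loop)
```

Run over the whole of wikipedia, the striking finding is that the overwhelming majority of chains converge on the one entry - which is the hierarchical pattern noted above.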

Hell, what might result if such a machine were set to the task of integrating the contents merely of a high-quality dictionary?

I'll leave the rest to the imagination.