In a nutshell, the earth going forward will be affected by what human beings do. This is why the era we are entering is now dubbed the Anthropocene. There are two major trends going on right now: (1) technological maturation and (2) stress on the ecological system. (When I think of ecological stresses, it's not just climate change that comes to mind; I also think of the acidification of the oceans, declining insect populations and biodiversity, destruction of the coral reefs, the Great Pacific Garbage Patch, antibiotic-resistant diseases, and other readily googlable troubling phenomena.)
(Also, any educated person these days should be well acquainted with ourworldindata.org.)
In light of exponential growth in technology, which now sees AI and machine learning going mainstream, advances in robotics, nanotechnology on the near horizon, lab-grown meat becoming affordable around this year (which can only put a sizable dent in the consumption of conventionally raised meat, even the organic and therefore more resource-intensive and more expensive kind, along with the methane such production generates), production automation making goods and services ever more affordable (counteracting to a great extent any supposed disemployment effects), and any number of other advances, it becomes very difficult to envision the future of humanity in much detail beyond a few years from now. The most significant of these advances would probably be in the area of AI, for the same reason that intelligence-capable human beings mark a rather radical departure from nature's and life's original courses. And one has to imagine AI helping humans solve problems in conjunction with their use of all the other newly emerging technologies.
Climate change and other actual or potential ecological crises would definitely be a major problem going forward if present human trends using present technology were to continue. But that is not going to happen. Do we really have any way of telling what the earth is going to be like in half a century? By then, will biodiversity be engineered by humans, the coral reefs restored, agriculture moved to laboratories, etc.? How about advances in human culture, e.g., philosophy for children becoming mainstream (and therefore superior rationality, ultimately Aristotelian-caliber rationality or intellectual perfectionism)? Will AI help humanity transcend its addictions to the rationality-undermining facets of social media, which people are already becoming sick of and looking for solutions to?
This seems to be a good time for bets to be placed as to whether this or that ecological challenge will be met by technological advances, and when. If people have too little information to go on to make such bets, then that just reinforces my point here: we really don't know how the earth is going to look all that far forward. And maybe that's the source of present-day anxieties. (We may be living dangerously, with all the psychological consequences of that.)
We might try to go 50 years into the past for some guide to what we might expect to transpire over the next 50 years. 51 years ago, Kubrick's 2001: A Space Odyssey was released. (It was also one year before man first landed on the moon.) There was inevitably some amount of speculation on Kubrick's part (and on that of Arthur C. Clarke, author of the book version), such as the form that advanced AI might take, with the eventually villainous HAL 9000 ("I'm sorry, Dave..."). But there was only so much that could be done even at the level of speculation, which the film's "mysterious" ending is meant to convey. As Kubrick explained in interviews at the time, the Star Gate sequence and the resulting Star Child are meant as symbolic and/or allegorical depictions of humanity taking a "leap" to a higher level of being. (The musical cue from "Also Sprach Zarathustra," Richard Strauss's musical tribute to Nietzsche's book, appears in the film when the ape advances into man, and again when man advances into the Star Child, which Kubrick directly refers to in interviews as a kind of superman.) But the symbolic or allegorical treatment is a replacement for literal depictions of futuristic humanity or of contact with alien species (represented indirectly by the black monolith), because at that point we just wouldn't know.
This reveals a problem with a lot of non-Kubrick science fiction. Take even such lauded sci-fi as Blade Runner, which takes place in the Los Angeles of 2019. By then, there would be humanoid replicants that mimic human beings almost flawlessly. Somehow, humanity would have gotten to the point of creating such replicants without first thinking through the implications. But it's precisely such cultural resources as Blade Runner that get humanity to think such things through first. It's why the year 1984 came to pass without the world becoming like Orwell's novel. As China begins implementing its "social credit" system very soon, it invites warnings and comparisons to Big Brother. (It's hard to tell whether the concerns here are overblown.)
Another common element in a lot of sci-fi, save perhaps for Star Trek: the futures depicted are often dystopian -- i.e., humanity misused its technology, with the result often being that a tyrannical government or corporate entity used that technology to control or dehumanize people, use them for gory entertainment, consume them, limit their lifespans, manipulate their minds, and so on. Even with Star Trek and Star Wars, we see wars occurring; but what would motivate beings who are that technologically advanced (and, presumably, intellectually advanced, as they use their technology to learn how to become more morally and aesthetically perfect) to go to war? The movie Independence Day (1996) depicts a hostile alien race - one that has mastered interstellar travel - coming to earth to use its resources. Perhaps going forward, humans will increasingly demand that movies with such dubious and intelligence-insulting premises not be made? That alone would be a cultural improvement, and less wasteful of storytelling resources. And becoming smarter and more efficient with resources is just part of humanity's technological improvement.
The same year as 2001's release, Paul R. Ehrlich foresaw doom with his book, The Population Bomb. In 1980 he made a wager with economist Julian Simon, "betting on a mutually agreed-upon measure of resource scarcity over the decade leading up to 1990. ... Ehrlich lost the bet, as all five commodities that were bet on declined in price from 1980 through 1990, the wager period." This strikes me as an instructive example of doom and gloom coming up against what Simon referred to as the ultimate resource: “skilled, spirited and hopeful people who will exert their will and imaginations for their own benefit, and so, inevitably, for the benefit of us all.” In short, the human mind.
Which is to say that one's level of anxiety over the future of planet earth is probably inversely proportional to one's confidence in the ability of humans to use their mental capacities to solve problems.
I can't say that I'm all that anxious about the condition of the earth going forward.
(My anxiety, if that's what it is, is more about how even intellectually and culturally advanced humans would manage to discover lasting meaning if/when they have all that extra time on their hands in a 'post-scarcity' era. I just hope beauty would always remain fulfilling, seeing as how living for to kalon - for the sake of the beautiful or noble or fine, where our values or needs stand in harmonious proportion in a hierarchy, and wherein we discover our unique form of self-actualization or eudaimonia - is ultimately the best theoretical accounting I can think of for our widely shared commonsense standard of value. Perhaps that means humans eventually becoming essentially aesthetic-creative beings. Is that what Nietzsche had in mind with the 'overman' idea?...)