SAN FRANCISCO — The parents of a former OpenAI researcher known for recently blowing the whistle on the company’s business practices are questioning the circumstances of their son’s death last month. In an interview this week, Suchir Balaji’s mother and father expressed confusion and shock over his sudden passing and doubted that their son could have died by suicide, as the county medical examiner determined. The family hired an expert to perform an independent autopsy but has yet to release the report’s findings. “We’re demanding a thorough investigation — that’s our call,” said Balaji’s mother, Poornima Ramarao. San Francisco police found Balaji dead in his Lower Haight apartment on Nov. 26, less than a week after his 26th birthday. The San Francisco Medical Examiner’s Office later told this news agency his death was ruled a suicide, though a final autopsy report has yet to be released while the office completes toxicology tests. Earlier this month, San Francisco police officials said there is “currently, no evidence of foul play.” Balaji’s death sent shockwaves throughout Silicon Valley and the artificial intelligence industry. He garnered a national spotlight in late October when he accused his former employer, OpenAI, of breaking federal copyright law by siphoning data from across the internet to train its blockbuster chatbot, ChatGPT. His concerns backed up allegations aired in recent years by authors, screenwriters and computer programmers who say OpenAI stole their content without permission, in violation of U.S. “fair use” laws governing how people can use previously published work. Media companies have been among those to sue the company, including The Mercury News and seven of its affiliated newspapers, and, separately, The New York Times. In an interview with The New York Times published in October 2024, Balaji described his decision to leave the generative artificial intelligence company in August while suggesting that its data collection practices are “not a sustainable model for the internet ecosystem as a whole.” “If you believe what I believe, you have to just leave the company,” he told the newspaper. By Nov. 18, Balaji had been named in court filings as someone who had “unique and relevant documents” that would support the case against OpenAI. He was among at least 12 people — many of them past or present OpenAI employees — named by the newspaper in court filings as having material helpful to its case. His death a week later has left Balaji’s parents reeling. In an interview at their Alameda County home this week, his mother said her only child “was an amazing human being, from childhood.” “No one believes that he could do that,” Ramarao said of his taking his own life. OpenAI did not immediately respond to a request for comment but in a statement to Business Insider said it was “devastated” to learn of Balaji’s death and that it had been in touch with his parents “to offer our full support during this difficult time.” “Our priority is to continue to do everything we can to assist them,” the company’s statement read. “We first became aware of his concerns when The New York Times published his comments and we have no record of any further interaction with him. “We respect his, and others’, right to share views freely,” the statement added. “Our hearts go out to Suchir’s loved ones, and we extend our deepest condolences to all who are mourning his loss.”
Born in Florida and raised in the Bay Area, Balaji was a prodigy from an early age, his mother told this news agency. He spoke her name at 3 months old, would ask her to light a lamp to cheer him up at 18 months and could recognize words at 20 months, she said. Balaji appeared to have a knack for technology, math and computing, taking home trophies and earning renown, including in the 2016 United States of America Computing Olympiad. In 2020, he went to work for OpenAI — viewing the company’s then-commitment to operating as a nonprofit as admirable, his mother said. His opinion of the company soured in 2022 while he was assigned to gather data from the internet for the company’s GPT-4 program, the New York Times reported. The program analyzed text from nearly the entire internet to train its artificial intelligence program, the outlet reported. Ramarao said she wasn’t aware of her son’s decision to go public with his concerns about OpenAI until the paper ran his interview. While she immediately harbored anxiety about his decision — going so far as to implore him to speak with a copyright attorney — Ramarao also expressed pride in her son’s bravery. “He kept assuring me, ‘Mom, I’m not doing anything wrong — go see the article. I’m just saying my opinion; there’s nothing wrong in it,’” said Ramarao, herself a former employee of Microsoft who worked on its Azure cloud computing program. “I supported him. I didn’t criticize him. I told him, ‘I’m proud of you, because you have your own opinions and you know what’s right, what’s wrong.’ He was very ethical.” After leaving the company, Balaji settled on plans to create a nonprofit, one centering on the machine learning and neuroscience fields, Ramarao said. He had already spoken to at least one venture capitalist for seed funding, she said. “I’m asking, like, ‘How will you manage your living?’” Ramarao said. She recalled how her son repeatedly tried to assuage any concerns about his finances, suggesting that “money is not important to me — I want to offer a service to humanity.” Balaji also appeared to be keeping a busy schedule. He turned 26 while on a backpacking trip in the Catalina Islands with several friends from high school. Such trips were commonplace for him — in April he went with several friends to Patagonia and South America. Balaji last spoke to his parents on Nov. 22, a 10-minute phone call that centered on his recent trip and ended with his talking about getting dinner. “He was very happy,” Ramarao said. “He had a blast. He had one of the best times of his life.” Ramarao remembers calling her son shortly after noon on Nov. 23 but said it rang once and went to voicemail. Figuring that he was busy with friends, she didn’t try visiting his apartment until Nov. 25, when she knocked but got no answer. She said she called authorities that evening but was told by a police dispatch center that little could be done that day. She followed up Nov. 26, and San Francisco police later found Balaji’s body inside his apartment. Ramarao said she wasn’t told of her son’s death until a stretcher appeared in front of Balaji’s apartment.
She was not allowed inside until the following day. “I can never forget that tragedy,” Ramarao said. “My heart broke.” Ramarao questioned authorities’ investigation of her son’s death, claiming that San Francisco police closed their case and turned it over to the county medical examiner’s office within an hour of discovering Balaji’s body. Ramarao said she and her husband have since commissioned a second autopsy of Balaji’s body. She declined to release any documents from that examination. Her attorney, Phil Kearney, declined to comment on the results of the family’s independent autopsy. Last week, San Francisco police spokesman Evan Sernoffsky referred questions about the case to the medical examiner’s office. David Serrano Sewell, executive director of the Office of the Chief Medical Examiner, declined to comment. Sitting on her living room couch, Ramarao shook her head and expressed frustration at authorities’ investigative efforts so far. “As grieving parents, we have the right to know what happened to our son,” Ramarao said. “He was so happy. He was so brave.” If you or someone you know is struggling with feelings of depression or suicidal thoughts, the 988 Suicide & Crisis Lifeline offers free, round-the-clock support, information and resources for help. Call or text the lifeline at 988, or see the 988lifeline.org website, where chat is available.

The Ontario Divisional Court will hold a judicial review hearing Wednesday for a Windsor police officer convicted and punished for his $50 donation to the Freedom Convoy in 2022. Const. Michael Brisco was found guilty of discreditable conduct in March 2023 following a six-day Police Services Act hearing. He was ordered to forfeit 80 hours of pay. Brisco lost a subsequent appeal of that conviction before the Ontario Civilian Police Commission (OCPC). “Canadians in any profession should be free to express themselves on whatever political issue they feel strongly about,” Darren Leung, a lawyer with the Justice Centre for Constitutional Freedoms, which is representing Brisco, said in a statement. “This case will test freedom of expression and the right of all Canadians to donate to the causes of their choice without fear of punishment,” the organization said Monday. “Constable Brisco should not be punished for supporting a perfectly legal protest which certain politicians such as the Prime Minister disliked,” said Leung. At the time of his anonymous donation, the veteran officer with an otherwise exemplary record said it was intended for the protesters in downtown Ottawa, not those participating in the Ambassador Bridge blockade mounted at the same time by other opponents of government COVID-19 mandates. At the original hearing, the prosecution had argued for a harsher penalty, saying Brisco’s donation had brought the Windsor Police Service into “disrepute” at a time when his uniformed colleagues were trying to dismantle the bridge blockade. The OCPC, an independent, quasi-judicial agency whose functions include hearing appeals of disciplinary decisions, agreed the penalty was “significant” but not unreasonable. The Justice Centre said it will argue this week that Brisco made his donation not in his capacity as a police officer, but anonymously and while he was on unpaid leave (for refusing to take the available COVID-19 vaccine). Part of Brisco’s appeal will be arguing that evidence of the donation only became public after an “illegal” hack into a crowdfunding platform that was then used by the OPP to track down officers who donated. The Justice Centre also said it will argue that the earlier disciplinary findings were based on media reports regarding “opinions that the Freedom Convoy was illegal,” which falls short of the necessary “clear and convincing” standard to support a finding of discreditable conduct. Key facts in the case included Brisco’s donation coming days after Ottawa’s police chief had deemed the protest in his city unlawful; Prime Minister Trudeau saying the protest there was “becoming unlawful;” and Ontario Premier Doug Ford calling it an “occupation.” The Divisional Court is a branch of the Superior Court of Justice and hears statutory appeals from administrative tribunals in Ontario. Brisco’s hearing is in Toronto, the only city where the Divisional Court sits regularly throughout the year. As recently as last month, the Star reported the City of Windsor was suing the federal government, still trying to recover the balance of approximately $900,000 of the nearly $7-million cost of responding to the week-long 2022 blockade, most of which went to policing and legal fees. Ottawa has already reimbursed Windsor the $6.1 million it says was refundable by the federal government.

Los Angeles Chargers (7-4) at Atlanta (6-5) Sunday, 1 p.m. EST, CBS BetMGM NFL Odds: Chargers by 1 1/2. Series record: Falcons lead 8-4. Against the spread: Chargers 7-3-1, Falcons 5-6. Last meeting: Chargers beat Falcons 20-17 on Nov. 6, 2022, in Atlanta. Last week: Ravens beat Chargers, 30-23; Falcons had bye week following 38-6 loss at Denver on Nov. 17. Chargers offense: overall (21), rush (13), pass (20), scoring (18). Chargers defense: overall (13), rush (10), pass (10), scoring (13). Falcons offense: overall (8), rush (14), pass (5), scoring (16). Falcons defense: overall (25), rush (19), pass (26), scoring (26). Turnover differential: Chargers plus-8, Falcons minus-3. RB Gus Edwards could move up as the lead back for Los Angeles as J.K. Dobbins (knee) is expected to miss the game. Edwards was activated from injured reserve earlier this month following an ankle injury and had nine carries for 11 yards with a touchdown in Monday night's 30-23 loss to Baltimore. WR Drake London has 61 catches, leaving him four away from becoming the first player in team history to have at least 65 receptions in each of his first three seasons. London has 710 receiving yards, leaving him 140 away from becoming the first player in team history with at least 850 in each of his first three seasons. Falcons RB Bijan Robinson vs. Chargers run defense. Robinson was shut down by Denver, gaining only 35 yards on 12 carries, and the Atlanta offense couldn't recover. The Chargers rank 10th in the league against the run, so it will be a challenge for the Falcons to find a way to establish a ground game with Robinson and Tyler Allgeier. A solid running attack would create an opportunity for offensive coordinator Zac Robinson to establish the play-action passes for quarterback Kirk Cousins. Dobbins appeared to injure his right knee in the first half of the loss to the Ravens, though coach Jim Harbaugh did not provide details. ... The Falcons needed the bye to give a long list of injured players an opportunity to heal. WR KhaDarel Hodge (neck) did not practice on Wednesday. WR Darnell Mooney (Achilles), CB Kevin King (concussion), DL Zach Harrison (knee, Achilles) and WR Casey Washington (concussion) were hurt in the 38-6 loss at Denver on Nov. 17 and were limited on Wednesday. CB Mike Hughes (neck), nickel back Dee Alford (hamstring), ILB Troy Andersen (knee), TE Charlie Woerner (concussion) and ILB JD Bertrand (concussion) also were limited on Wednesday after not playing against Denver. C Drew Dalman (ankle) could return. The Chargers have won the past three games in the series following six consecutive wins by the Falcons from 1991-2012. Los Angeles took a 33-30 overtime win in Atlanta in 2016 before the Chargers added 20-17 wins at home in 2020 and in Atlanta in 2022. The Falcons won the first meeting between the teams, 41-0 in San Diego in 1973. Each team has built its record on success against the soft NFC South. Atlanta is 4-1 against division rivals. Los Angeles is 2-0 against the NFC South this season. The Chargers have a four-game winning streak against the division. ... Atlanta is 0-2 against AFC West teams, following a 22-17 loss to Kansas City and the lopsided loss at Denver. They will complete their tour of the AFC West with a game at the Las Vegas Raiders on Dec. 16. ...
The Falcons are the league's only first-place team with a negative points differential. Atlanta has been outscored 274-244. The loss of Dobbins, who has rushed for eight touchdowns, could put more pressure on QB Justin Herbert and the passing game. Herbert's favorite option has been WR Ladd McConkey, who has four TD receptions among his 49 catches for 698 yards. McConkey, the former University of Georgia standout who was drafted in the second round, could enjoy a productive return to the state against a Falcons defense that ranks only 26th against the pass. AP NFL: https://apnews.com/hub/nfl

NEW YORK (AP) — With the end of 2024 around the corner, you might be reflecting on financial goals for 2025. Whether you're saving to move out of your parents' house or pay off student loan debt, financial resolutions can help you stay motivated, said Courtney Alev, consumer advocate for Credit Karma. “Entering a new year doesn’t erase all our financial challenges from the prior year," Alev said. “But it can really help to bring a fresh-start mentality to how you’re managing your finances.” If you’re planning to make financial resolutions for the new year, experts recommend that you start by evaluating the state of your finances in 2024. Then, set specific goals and make sure they're attainable for your lifestyle. Here are some tips from experts: Think about how you currently deal with finances — what's good, what's bad, and what can improve. “Let this be the year you change your relationship with money,” said Ashley Lapato, personal finance educator for YNAB, a budgeting app. If you feel like money is a chore, that there's shame surrounding the topic of money, or like you were born being “bad at money,” it's time to change that mentality, Lapato said. To adjust your approach, Lapato recommends viewing money goals as an opportunity to imagine your desired lifestyle in the future. She recommends asking questions like, “What do my 30s look like? What do my 40s look like?” and using money as a means to get there. Liz Young Thomas, head of SoFi Investment Strategy, added that it’s key you forgive yourself for past mistakes in order to move into the new year with motivation. When setting your financial resolutions for 2025, it's important to establish the “why” of each, said Matt Watson, CEO of Origin, a financial tracking app. “If you can attach the financial goal to a bigger life goal, it’s much more motivating and more likely you’ll continue on that path,” Watson said. Whether you're saving to buy a house, pay off credit card debt or take a summer vacation, being clear about the goal can keep you motivated. Watson also recommends using a tool to help you keep track of your finances, such as an app, spreadsheet, or website. “After three years of inflation, your pay increases are likely still playing catch up to your monthly expenses, leaving you wondering where all the money is going," said Greg McBride, chief financial analyst at Bankrate. "Make that monthly budget for 2025 and resolve to track your spending against it throughout the year." McBride said that you may need to make adjustments during the year as certain expenses increase, which would require cutting back in other areas. “Calibrate your spending with your income, and any month you spend less than budgeted, transfer the difference into your savings account, ideally a high-yield savings account,” he said. “Interest rates aren’t likely to come down very fast, so you’re still going to have to put in the hard work of paying down debt, especially high-cost credit card debt, and do so with urgency,” McBride said. Start by taking stock of how much debt you have now relative to the beginning of the year. Hopefully you’ve made steady progress on paying it down, but, if you’ve gone in the other direction, McBride encourages making a game plan. That includes looking into 0% balance transfer offers. “You have more power over credit card interest rates than you think you do," said Matt Schulz, chief credit analyst at LendingTree. 
“Wielding that power is one of the best moves you can make in 2025.” A 0% balance transfer credit card is “a good weapon” in the fight against high card APRs, or annual percentage rates, he said. A low-interest personal loan is an option as well. You may simply be able to pick up the phone and ask for a lower interest rate. LendingTree found that a majority of people who did that in 2024 were successful, and the average reduction was more than 6 points. When planning for your financial resolutions, it’s important to consider how you’re going to make your goals sustainable for your lifestyle, said Credit Karma's Alev. “It really is a marathon, not a sprint,” Alev said. Alev recommends setting realistic, practical goals to make it easier to stick with them. For example, instead of planning to save thousands of dollars by the end of the year, start by saving $20 a paycheck. Even when your plans are achievable, there are times you'll get derailed. Maybe it’s an unexpected medical bill or an extraordinary life event. When these situations happen, Alev recommends trying not to feel defeated and working to get back on track without feeling guilty. “You can't manage what you can't see, so set a New Year’s resolution to check your credit score monthly in 2025," said Rikard Bandebo, chief economist at VantageScore. “Be sure to pay more than the minimum on your credit accounts, as that's one of the best ways to boost your credit score.” Bandebo also advises student loan borrowers to make all payments on time, as servicers will begin to report late payments starting in January, and missed payments will affect borrowers' credit scores. Automated changes, like increasing workplace 401(k) plan contributions, setting up direct deposits from paychecks into dedicated savings accounts, and arranging for monthly transfers into an IRA and/or 529 college savings accounts all add up quickly, McBride said. Your financial goals can encompass more than just managing your money better — they can also be about keeping your money safe from scams. A golden rule to protect yourself from scams is to “slow down,” said Johan Gerber, executive vice president of security solutions at Mastercard. “You have to slow down and talk to other people if you’re not sure (whether or not) it’s a scam,” said Gerber, who recommends building an accountability system with family to keep yourself and your loved ones secure. Scammers use urgency to make people fall for their tricks, so taking your time to make any financial decision can keep you from losing money. Your financial goals don’t always have to be rooted in a dollar amount — they can also be about well-being. Finances are deeply connected with our mental health, and, to take care of our money, we also need to take care of ourselves. “I think that now more than any other year, your financial wellness should be a resolution," said Alejandra Rojas, personal finance expert and founder of The Money Mindset Hub, a mentoring platform for women entrepreneurs. "Your mental health with money should be a resolution.” To focus on your financial wellness, you can set one or two goals focusing on your relationship with money. For example, you could find ways to address and resolve financial trauma, or you could set a goal to talk more openly with loved ones about money, Rojas said. —— The Associated Press receives support from Charles Schwab Foundation for educational and explanatory reporting to improve financial literacy. The independent foundation is separate from Charles Schwab and Co. Inc. 
The AP is solely responsible for its journalism.
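To see the arithmetic behind that phone call, here is a minimal back-of-the-envelope sketch in Python. The $5,000 balance and 24% starting APR are illustrative assumptions, not figures from the article; only the roughly 6-point average reduction comes from the LendingTree finding cited above.

```python
# Rough arithmetic for the rate-reduction advice above.
# The balance and starting APR are illustrative assumptions;
# the ~6-point cut reflects LendingTree's reported average.
balance = 5_000.00             # hypothetical revolving balance
apr_before = 0.24              # assumed starting APR (24%)
apr_after = apr_before - 0.06  # APR after a ~6-point reduction

yearly_saving = balance * (apr_before - apr_after)
print(f"Carrying ${balance:,.0f}: about ${yearly_saving:,.0f} less interest per year, "
      f"or ${yearly_saving / 12:,.2f} a month to redirect toward the balance.")
```

Redirecting that saved interest toward the balance itself, much like sweeping budget surpluses into savings as McBride suggests, shortens the payoff timeline further.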



Last year, we published a series about what Google had done to the web, capped off by a feature about search engine optimization titled “The People Who Ruined the Internet.” It made more than a few SEO experts upset (which was tremendously fun for me because I love watching people yell at Nilay on various social platforms). But a year has passed, and we’ve had a change of heart. Maybe search engine optimization is actually a thing. Maybe appeasing the search algorithm is not only a sustainable strategy for building a loyal audience, but also a strategic way to plan and produce content. What are journalists, if not content creators? Anyway, SEO community, consider this our apology. And what better way to say “our bad, your industry is not a cesspool of AI slop but a brilliant vision of what a useful internet could look like” than collecting all the things we’ve learned in one handy print magazine? Which is why I’m proud to introduce Just kidding! (You weren’t fooled for a second, were you?) If you pull back the cover, you’ll discover the real magazine: , an anthology of stories about “content” and the people who “make” it. In very fashion, we are meeting the moment where the internet has been overrun by AI garbage by publishing a beautifully designed, limited edition print product. (Also, the last time we printed a magazine, it won a very prestigious design award.) collects some of our best stories over the past couple of years, capturing the cynical push for the world’s great art and journalism to be reduced into units that can be packaged, distributed, and consumed on the internet. Consider as our resistance to that movement. With terrific new art and photography, we’re making the case that great reporting is vital and enduring — and worth paying for. This gorgeous, grotesque magazine can be yours if you commit to an annual subscription while supplies last. You can read more about our subscription here.

In the digital age, information flows at speeds and scales that are unprecedented in human history. Social media platforms, digital news outlets, and personal devices collectively serve as both mirrors and engines of cultural discourse. Within this vast ecosystem, a new frontier has emerged: synthetic media, or AI-generated media. And with its advance outpacing our ability to corral its impact, we are headed for trouble. Synthetic media encompasses anything from deepfakes—highly convincing audio-visual fabrications—to seemingly benign AI-driven marketing campaigns. Although synthetic media hold transformative potential for creative expression, storytelling, and other constructive uses, they also possess the capacity to disrupt factual consensus, exploit cognitive biases, and further polarize social and political communities. This risk is compounded by lagging regulations and an under-informed public. Deepfakes, in particular, have transitioned from obscure internet novelty to a major concern for politicians, corporations, and everyday people. These manipulations often appear so authentic that viewers can be easily misled into believing false narratives or malicious content. Beyond the realm of video, AI systems can create deceptive audio and text that masquerade as human-generated. As large language models continue to evolve, the line between human and machine authorship becomes increasingly blurred, raising ethical and legal questions about authenticity, accountability, and transparency. The consequences of failing to regulate and label AI-generated media can be dramatic. Consider how misleading content might alter electoral outcomes, stoke social conflicts, damage reputations, or lead to fraudulent activities. These risks are not hypothetical; examples have already surfaced globally, with high-profile incidents where political leaders were impersonated or where “evidence” of events that never occurred went viral. Urgent calls to regulate AI-generated media are therefore not alarmist—they reflect a pragmatic response to a rapidly escalating threat landscape. One crucial reason for the urgency in regulating AI-generated media is rooted in our cognitive wiring. Humans evolved to process visual and auditory cues quickly, relying on these cues for survival. Our ancestors formed snap judgments about threats or opportunities in part because of the speed at which they could interpret sensory data. This evolutionary trait endures in modern times: we tend to believe our eyes and ears, and this trust in our sensory perception underpins the credibility we accord to photographs, videos, or audio recordings. Deepfakes exploit this trust. A well-crafted synthetic video or audio clip triggers the same cognitive mechanisms that authenticate what we see or hear in everyday life. Moreover, because technology increasingly blurs the boundary between what is computer-generated and what is real footage, people lack the inherent “cognitive safeguards” or “skepticism filters” that would otherwise protect them. This vulnerability is especially pronounced when we are emotionally invested in the content—such as a purported leaked video supporting our political beliefs or exposing the misdeeds of a public figure we may already distrust. Beyond the broader evolutionary tendency to trust our senses, deepfakes and other forms of AI-generated content can exploit a variety of cognitive biases. Confirmation bias: We naturally gravitate toward information that aligns with our preexisting beliefs. 
AI-generated content that confirms our worldview—whether it is a faked video showing a rival politician in a compromising position or marketing material suggesting our lifestyle is superior—reinforces that belief. This is especially problematic in online echo chambers and algorithmic social media, where such content can spread unchecked. The availability heuristic: We often judge the likelihood of events by how easily examples come to mind. If deepfakes featuring a specific type of scandal become widespread, we are more likely to assume that such scandals are common and, consequently, believe them more readily. Anchoring: Early impressions matter. The first piece of information we see about a topic often becomes the benchmark against which subsequent information is compared. A viral AI-manipulated video that spreads quickly can set a narrative “anchor” in the public’s mind, making corrections or denials less persuasive later. At the heart of many disinformation campaigns lies the “illusory truth effect,” a well-documented psychological phenomenon in which repeated exposure to a statement increases the likelihood of individuals accepting it as true. Even if the content is labeled as false or is obviously misleading upon careful inspection, frequent repetition can transform falsehoods into something that “feels” true. Deepfakes and AI-generated texts can be replicated or disseminated easily, enabling bad actors to harness this effect at scale. For instance, a deepfake might be briefly posted on social media—enough to generate initial traction and headlines—and then taken down or debunked. The image or snippet of the fake can continue circulating in people’s memories or reappear elsewhere, fortifying the original false impression. Without clear, consistent labeling mechanisms to counteract this cyclical exposure, the illusion can become a self-reinforcing loop in the public sphere. The introduction of malicious deepfakes into the public discourse raises the specter of heightened political polarization. As misinformation spreads, groups on different sides of the ideological spectrum may become entrenched in opposing “realities,” each bolstered by fabricated evidence that appears legitimate. This polarized environment fosters a climate of hostility and erodes the possibility of reasoned debate or consensus-based decision-making. Moreover, polarizing content tends to garner more clicks, shares, and comments—a phenomenon that social media algorithms can inadvertently amplify. When platform engagement metrics favor content that triggers strong emotional reactions, deepfakes that evoke outrage or support particular biases become hot commodities in the information marketplace, spiraling ever outward and forming a vicious cycle of mistrust. AI-manipulated media also risks reinforcing societal biases in more insidious ways. Deepfakes can be used to stage events that validate racial, gender, or cultural stereotypes. For example, an unscrupulous individual might distribute a manipulated video that portrays certain ethnic or religious groups in a negative light, fueling xenophobic or racist sentiments. Even if the content is later revealed as inauthentic, the initial exposure can have lasting effects. People who already harbor prejudices may use the deepfake as retroactive “proof” of their biases, while those previously neutral might become more susceptible to persuasion. This cycle not only marginalizes vulnerable communities but may also stoke social and political unrest. 
The ultimate casualty in an environment saturated with unmarked AI-generated media is a collectively agreed-upon reality. Democracy and social cohesion hinge upon the ability to arrive at shared facts—from the outcome of elections to scientific data on public health. When any piece of evidence can be digitally fabricated or manipulated, skepticism escalates and conspiratorial thinking can flourish. Activism: Grassroots movements often rely on viral videos or audio clips to disseminate evidence of social injustices or to call for political change. If the authenticity of such evidence is routinely called into question, activism may lose its momentum. Conversely, maliciously designed deepfakes could falsely implicate activists in wrongdoing, discrediting their causes. Public trust: Ordinary citizens are inundated with content daily, from social media posts to streaming services. Without clear cues, it becomes harder for them to filter real events from artificial fabrications. As trust diminishes, a general malaise or cynicism can set in, dissuading people from civic engagement or even basic media consumption. Vulnerable populations: Communities lacking media literacy or robust digital infrastructure may be even more vulnerable to deepfake-driven manipulation. In regions with limited access to fact-checking resources or high barriers to digital literacy, malicious content can gain traction rapidly. Similarly, older adults may be more prone to believing doctored videos, given they grew up in an era where the public generally trusted film or television footage as verifiable proof. The rapid evolution of AI outpaces the slower, methodical processes of legislative bodies. While lawmakers debate and study the implications, new algorithms make the creation of deepfakes more efficient and convincing. The cost barrier is dropping; what once required a well-funded lab can now be done on a laptop with open-source tools. Malicious actors—be they private trolls, political propagandists, or even foreign adversaries—are quick to exploit this. Delayed responses grant these actors a substantial head start. They can shape public perceptions in ways that are difficult to reverse, especially when global events—elections, international conflicts, or public health crises—hang in the balance. Lessons from prior disinformation campaigns show that once a narrative takes root, it can persist long after fact-checks and retractions. Political manipulation: In 2020, a manipulated video of a prominent politician slurring words circulated widely, causing uproar among opponents and concern among supporters. Although debunked days later, the initial impact on public opinion had already been registered in poll data. Corporate fraud: CEOs and CFOs have been impersonated via AI-generated voice technology, instructing subordinates to transfer funds or provide sensitive company information. In several known cases, companies lost millions of dollars before realizing the voice messages were fabricated. Conflict escalation: Faked videos purporting to show atrocities committed by one side in a regional conflict have the capacity to incite violence. When these videos go viral and are further amplified by local media, the risk of escalation grows dramatically. Regulatory measures and explicit labeling protocols must adopt a preemptive, rather than reactive, stance. Instead of waiting for catastrophic misuse to illustrate just how damaging deepfakes can be, policymakers and technology companies can collaborate on robust frameworks to identify, label, and remove malicious content. 
By setting a strong precedent early, societies can minimize the risk of normalizing deception. Visible watermarks: One of the simplest methods to label synthetic media involves text overlays within the video or image. For instance, the corners of a video could carry watermarks stating “AI-Generated” or “Digitally Altered.” While watermarks can be removed by a sophisticated manipulator, a standardized approach across platforms would help consumers quickly identify legitimate versus suspicious content. Invisible watermarks: Beyond visible overlays, invisible digital watermarks embedded in the file’s data can serve as a more tamper-resistant form of labeling. Any attempt to alter the file or remove the watermark would ideally degrade the quality or otherwise be detectable by specialized tools (a minimal sketch of both watermarking approaches follows this passage). Disclaimers: When AI-generated media is played—whether it is a video or an audio clip—platforms could require a brief disclaimer that states: “The following content has been identified as AI-generated.” This approach, similar to content warnings, can preempt potential misunderstandings and encourage viewers or listeners to approach the material with a critical eye. Social media platforms, streaming services, and other digital outlets are at the vanguard of content distribution. Their role in combating synthetic disinformation is critical. Automated detection: Platforms can invest in AI-driven detection algorithms that continually scan uploaded content for known markers of manipulation (e.g., inconsistencies in lighting or facial movement). Although detection algorithms are in a cat-and-mouse game with deepfake generation, continued innovation and real-time updates can mitigate large-scale malicious dissemination. User reporting: Just as users can report spam or hate speech, platforms could introduce specialized reporting categories for suspected deepfakes. Advanced user communities, such as professional fact-checkers and journalists, can further support the verification process. Moderation policies: Clear guidelines are needed so that moderators know how to handle suspected deepfakes. This includes removal timelines, appeals processes, and transparency reports that show how many pieces of deepfake content were flagged and removed. The arms race between deepfake creators and detection tools is well underway. Several promising methods focus on subtle artifacts or “fingerprints” left by generative models—for example, unnatural blinking patterns, inconsistencies in lighting, or abnormal facial muscle movements. As generative models become more advanced, detection approaches must keep pace by training on the latest synthetic data. Machine learning experts emphasize that no single detection method is a silver bullet; a multi-layered approach is best. For instance, a platform might combine digital watermark checks, physiological feature analysis, and blockchain-based content provenance tracking to create a robust defense system. While detection alone cannot stop all malicious activity, it serves as a foundational pillar in the overall strategy to combat synthetic manipulation. Even the most sophisticated detection technologies will falter if the general public remains unaware of the threat. Education campaigns—run by governments, NGOs, and tech companies—can teach people how to spot potential deepfakes. One of the most frequent objections to regulating and labeling AI-generated media pertains to free speech. Critics argue that mandatory labeling could impede creative expression, from artists experimenting with generative art to filmmakers using AI for special effects. 
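To make the two watermarking ideas concrete, here is a minimal sketch in Python using the Pillow and NumPy libraries: it stamps a visible “AI-Generated” label on an image and hides a short marker string in the red channel’s least significant bits. The file names and marker string are hypothetical, and a naive LSB mark will not survive lossy re-encoding or cropping; this illustrates the concept only, not a production standard such as tamper-resistant content credentials.

```python
# Minimal illustration of visible + invisible (LSB) labeling.
# Marker text and file names are arbitrary examples, not a standard.
import numpy as np
from PIL import Image, ImageDraw

MARKER = "AI-GENERATED"  # hypothetical provenance marker (12 bytes)

def add_visible_label(img: Image.Image) -> Image.Image:
    """Stamp a visible 'AI-Generated' overlay in the lower-left corner."""
    labeled = img.convert("RGB").copy()
    draw = ImageDraw.Draw(labeled)
    draw.text((10, labeled.height - 20), "AI-Generated", fill=(255, 255, 255))
    return labeled

def embed_invisible_marker(img: Image.Image, marker: str = MARKER) -> Image.Image:
    """Hide `marker` in the least significant bit of the red channel."""
    pixels = np.array(img.convert("RGB"), dtype=np.uint8)
    bits = [(byte >> i) & 1 for byte in marker.encode() for i in range(8)]
    red = pixels[..., 0].flatten()
    if len(bits) > red.size:
        raise ValueError("image too small to hold marker")
    red[: len(bits)] = (red[: len(bits)] & 0xFE) | np.array(bits, dtype=np.uint8)
    pixels[..., 0] = red.reshape(pixels.shape[:2])
    return Image.fromarray(pixels)

def read_invisible_marker(img: Image.Image, length: int = len(MARKER)) -> str:
    """Recover a marker of known byte length from the red-channel LSBs."""
    red = np.array(img.convert("RGB"), dtype=np.uint8)[..., 0].flatten()
    data = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte |= (int(red[i * 8 + j]) & 1) << j
        data.append(byte)
    return data.decode(errors="replace")

if __name__ == "__main__":
    synthetic = Image.new("RGB", (256, 256), "gray")  # stand-in for AI output
    stamped = embed_invisible_marker(add_visible_label(synthetic))
    stamped.save("labeled.png")  # PNG is lossless, so the LSBs survive
    print(read_invisible_marker(Image.open("labeled.png")))  # -> AI-GENERATED
```

In practice, a platform would pair a fragile mark like this with the detection and provenance layers described above, since any single mechanism is easy to strip.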
Such critics worry that an overly broad or poorly defined regulatory framework may chill innovation and hamper the legitimate uses of synthetic media. However, these concerns can be addressed through nuanced policies. For instance, requiring an “AI-Generated” watermark does not necessarily stifle the creative process; it merely informs the audience about the content’s origin. The difference between legitimate creativity and malicious manipulation lies in transparency and intent. If creators are upfront about their manipulations, they still retain the freedom to innovate while respecting the public’s right to be informed. Another valid concern is that legislation aiming to curb malicious deepfakes could become a vehicle for authoritarian regimes to clamp down on free speech. Leaders could exploit the label of “synthetic media” to discredit genuine evidence of human rights abuses, or to justify mass censorship. This underscores the need for international standards accompanied by oversight mechanisms that ensure labeling requirements and takedown policies are not abused. To prevent overreach, any law targeting synthetic media should be transparent, narrowly tailored, and subject to judicial review. Multi-stakeholder input—from civil liberties groups, academic experts, industry representatives, and everyday citizens—can help craft legislation that balances public protection with fundamental human rights. Regulation in the realm of AI-generated media sits at the intersection of civil liberties and public welfare. The dilemma is not dissimilar to debates around hate speech or misinformation. While societies must preserve the right to free expression, they also have an obligation to protect citizens from harm. AI-generated media, when weaponized, can be as harmful as defamatory propaganda or incitement of violence, meriting its own set of safeguards. A measured approach ensures that policies serve their intended purpose—helping citizens distinguish truth from fabrication—without morphing into tools of repression. A transparent labeling requirement, combined with a legal framework that penalizes malicious intent, can maintain this balance. In effect, it draws a line between permissible creative uses of AI and the reckless endangerment of public trust. Regulations and labeling initiatives that work in one cultural or linguistic context may not translate seamlessly elsewhere. For instance, text overlays in English may fail to inform audiences in countries where English is not widely spoken. Additionally, cultural norms around privacy, free speech, and state authority vary widely. A labeling system that is accepted in one area might be viewed skeptically in regions with stronger censorship regimes or different legal traditions. Moreover, the very concept of “free speech” is not uniform across the globe. Some countries already have strong hate speech or misinformation laws, while others may lack the legal infrastructure to implement new regulations. Therefore, any international effort to standardize labeling must incorporate local adaptations, ensuring that the underlying principle of transparency remains intact, but is delivered in culturally and linguistically appropriate forms. Despite these variations, certain universal principles can guide the global approach to regulating AI-generated media. Transparency: Whether through text overlays, digital watermarks, or disclaimers, the public must be made aware when they are viewing synthetic media. 
The precise methods for delivering this information can be adapted locally, but the underlying principle should remain consistent. Responsibility: Creators and distributors of synthetic media have a responsibility to ensure that viewers or listeners have enough information to make informed judgments about content authenticity and its context relative to reality. This is especially crucial when real human images, voices, or personal data are manipulated. Accountability: Governments, platform operators, and creators should be held accountable for failing to meet established guidelines. Where malicious intent is proven, legal mechanisms must be in place to enforce sanctions. Where ignorance or technical limitations lead to unintentional violations, a tiered system of penalties or corrective measures might be more appropriate. Deepfake technology is not confined to national borders; malicious actors often operate on a global scale. Consequently, international collaboration is essential. Just as nations have come together to form treaties on cybercrime, chemical weapons, and other cross-border threats, a similar multilateral framework could address the proliferation of AI-generated disinformation. A global body—potentially an offshoot of organizations like the United Nations Educational, Scientific and Cultural Organization (UNESCO) or the International Telecommunication Union (ITU)—could help establish best practices, offering guidance on policy, detection tools, and public education. While enforcement would likely remain at the national level, international oversight could encourage consistency, reduce regulatory loopholes, and mobilize resources for less technologically advanced nations. AI-generated media is a double-edged sword. It opens possibilities for unprecedented creative applications, from hyper-realistic film productions to empathetic storytelling experiences that place audiences in different worlds or historical eras. Education could become more immersive, activism more compelling, and art more provocative. Yet these constructive ends are overshadowed by the grave potential for harm—sowing social discord, undermining electoral processes, discrediting legitimate reporting, and exacerbating societal biases. The psychological underpinnings that make deepfakes so effective—our inherent trust in sensory data, coupled with cognitive biases like the illusory truth effect—underscore the urgency of swift action. Without explicit labeling, accountability frameworks, and educational programs, AI-manipulated content will further erode public consensus on reality. In communities already rife with political or ideological fault lines, the infiltration of advanced deepfakes could tip the balance toward conflict or, at the very least, deepen existing fractures. Regulation and labeling standards stand as our first line of defense. Text overlays, digital watermarks, platform-based disclaimers, and multi-layered detection systems can help restore at least a measure of trust. Legislation, if carefully crafted, can deter malicious actors by raising the legal and moral stakes. Global collaboration and cultural sensitivity will be necessary to ensure that these measures neither hamper legitimate creativity nor become tools for repression. In many ways, the fight against unregulated synthetic media is part of the broader struggle to preserve truth, accountability, and informed democratic governance in a digital world. 
Failing to act immediately risks normalizing an environment where fabricated evidence permeates public discourse, institutions lose credibility, and citizens retreat into isolated echo chambers of misaligned “facts.” By contrast, a robust system of labeling, legislation, and public awareness can provide the bulwark we need against a future where the line between truth and fabrication is hopelessly blurred. It is now, at this critical juncture, that we must institute comprehensive and enforceable regulations for AI-generated media. In doing so, we safeguard not only our political systems, social cohesion, and individual reputations, but also the very concept of shared reality. If we respond adequately and swiftly, we may harness the wonders of AI-driven creativity while ensuring that the cornerstone of civil society—our trust in what we see and hear—remains intact. Alex Cooke is a Cleveland-based portrait, events, and landscape photographer. He holds an M.S. in Applied Mathematics and a doctorate in Music Composition. He is also an avid equestrian.

Canadian Prime Minister Justin Trudeau's government on Monday survived a third vote of no confidence in as many months, brought by his main Tory rival. The minority Liberal government got the support of the New Democratic Party (NDP), a small leftist faction once aligned with the ruling Liberals, to defeat the motion 180-152. The text of the motion echoed NDP leader Jagmeet Singh's own past criticisms of Trudeau since breaking off their partnership in late August, calling him "too weak, too selfish." Neither Singh nor Trudeau was present for the vote. The House of Commons has been deadlocked most of this fall session by an unprecedented two-month filibuster by the Conservatives. But Speaker Greg Fergus, in a rare move, ordered a short break in the deadlock to allow for this and other possible confidence votes, and for lawmakers to vote on a key spending measure. MPs are scheduled to vote Tuesday on the spending package, which includes funds for social services, disaster relief and support for Ukraine. With a 20-point lead in polls, Conservative leader Pierre Poilievre has been itching for an election call since the NDP tore up its coalition agreement with the Liberals. But the NDP and other opposition parties, whose support is needed to bring down the Liberals, have so far refused to side with the Conservatives. Two no-confidence votes brought by the Tories in September and October failed when the NDP and the separatist Bloc Quebecois backed the Liberals. In Canada's Westminster parliamentary system, a ruling party must hold the confidence of the House of Commons, which means maintaining support from a majority of members. The Liberals currently have 153 seats, versus 119 for the Conservatives, 33 for the Bloc Quebecois, and the NDP's 25. Trudeau swept to power in 2015 and has managed to hold on through two elections in 2019 and 2021.

New Orleans Saints quarterback Derek Carr sustained a left hand injury and possible concussion in the fourth quarter of Sunday's 14-11 victory over the New York Giants. The Saints feared Carr fractured the hand, per reports, and he was slated to undergo further testing. He reportedly had a cast on the hand when exiting the stadium. Saints interim coach Darren Rizzi said Carr may have to enter the concussion protocol. Carr was injured when he tried to leap for a first down late in the final quarter. He was near the sideline and went airborne, landing hard on the left hand, with his face then slamming into the turf as he landed out of bounds with 3:59 left in the game. Jake Haener finished up the game for the Saints. Carr completed 20 of 31 passes for 219 yards, one touchdown and one interception for New Orleans. Overall, Carr has passed for 2,145 yards, 15 touchdowns and five interceptions this season. He missed three games earlier this season due to an oblique injury. --Field Level Media

US President-elect Donald Trump says on his first day in office he will pardon rioters involved in the January 6, 2021 Capitol attack, further building expectations for a broad granting of clemency. "I'm going to be acting very quickly, first day," Trump said on NBC News' Meet the Press with Kristen Welker on Sunday when asked when he planned to pardon his supporters who were charged in the attack aimed at overturning his 2020 election defeat. Trump told Welker there could be "some exceptions" to his pardons if the individuals had acted "radical" or "crazy" during the assault, which left more than 140 police officers injured and led to several deaths. But Trump described the prosecutions of his supporters as inherently corrupt and did not rule out pardoning the more than 900 defendants who had already pleaded guilty, including those accused of acting violently in the attack. "I'm going to look at everything. We're going to look at individual cases," Trump said. The comments - Trump's most detailed on the issue of pardons since he defeated Vice President Kamala Harris - will likely add to already high expectations for broad action once Trump is sworn in to office on January 20. "He continues to put out the public message closer and closer to what the J6 community is asking for, which is clemency for all of the January 6ers," Suzzanne Monk, a longtime advocate for defendants charged in the riot, told Reuters. Hopes among January 6 defendants and their supporters for broad-based clemency have been growing over the past week after President Joe Biden pardoned his son Hunter, marking a reversal from his pledge not to interfere with his son's criminal cases. Biden said Hunter deserved a pardon because he was the victim of political persecution, an argument Trump will likely use to justify mass pardons. Some Biden critics said his decision would lessen the political cost for Trump. In what has been billed as America's largest-ever criminal investigation, at least 1572 defendants have been charged in the January 6 attack, with crimes ranging from unlawfully entering restricted grounds to seditious conspiracy and violent assault. More than 1251 have been convicted or pleaded guilty and 645 have been sentenced to prison, with punishments ranging from a few days to 22 years, according to the latest data from the Justice Department.
By Conor Ryan
The Red Sox have signaled all offseason that they’re willing to spend heavily. Beyond team president Sam Kennedy confirming Boston’s interest in Juan Soto and the team’s readiness to exceed MLB’s competitive balance tax (CBT) to bring in top talent, multiple reports have tied the Red Sox to some of the top names in both free agency and on the trade market. But if Boston wants to win the high-stakes sweepstakes for stars like Soto, Corbin Burnes, and Max Fried, the Red Sox might have to significantly outbid some other deep-pocketed teams like the Yankees. Why? According to longtime ESPN baseball writer Buster Olney, the Red Sox’ status as a top destination for players has waned in recent years — especially after the team dealt Mookie Betts to the Dodgers in February 2020. “One market factor that shifts cyclically is how some teams become a preferred destination for players, while other teams lose ground in the perception game,” Olney posted on X. “Boston is aggressive with dollars now, but the Red Sox will have to pay extra to overcome a negative player perception that really started growing when the team wouldn’t pay Mookie Betts.” The Red Sox’ decision to move on from Betts has been nothing short of a disaster for Boston. Connor Wong is the only remaining player still on Boston’s roster from that deal with the Dodgers. Meanwhile, Betts has gone on to win two World Series titles, four Silver Sluggers, and two Gold Gloves over the last five seasons in Los Angeles. Beyond Betts’ exit, the Red Sox have let other homegrown stars like Xander Bogaerts walk in free agency — with Boston prioritizing internal development over active offseasons. The result has granted Boston one of the top farm systems in baseball, but little to show for it at the big-league level so far. Boston has only punched its ticket to the postseason once over the last six seasons. Add in the Red Sox’ quiet offseason in 2023 after promises of a “full throttle” approach, and Boston might have some work to do when it comes to re-establishing itself as a major player in free agency. The Red Sox have already lost out on two potential pitching targets in Blake Snell and Yusei Kikuchi — who signed with the Dodgers and Angels, respectively, this week. Conor Ryan is a staff writer covering the Bruins, Celtics, Patriots, and Red Sox for Boston.com, a role he has held since 2023.

FLORHAM PARK, N.J. (AP) — New York Jets running back Breece Hall could play Sunday at Jacksonville after missing a game with a knee injury. Hall has been dealing with a hyperextension and injured MCL in his left knee that sidelined him last Sunday at Miami. But he was a full participant at practice Friday after sitting out Wednesday and Thursday. Hall was officially listed as questionable on the team's final injury report. “He looks good right now,” interim coach Jeff Ulbrich said. “So it’s promising.” Hall leads the Jets with 692 yards rushing and four touchdown runs, and he also has 401 yards receiving and two scores on 46 catches. A pair of rookies helped New York offset Hall's absence last weekend, with Braelon Allen rushing for 43 yards on 11 carries, and Isaiah Davis getting 40 yards on 10 attempts and scoring his first rushing touchdown. “We’re hopeful and we’ll see how it goes,” Ulbrich said of Hall. The Jets will get star cornerback Sauce Gardner back after he missed a game with a hamstring injury, but New York's secondary appears likely to be without cornerback D.J. Reed because of a groin injury. Reed was listed as doubtful after he didn't practice Thursday or Friday. “It’s been something that’s kind of lingered here and there,” Ulbrich said. “It’s gotten aggravated and then it went away, and then it got aggravated again. So, it’s just dealing with that.” Backup Brandin Echols is out with a shoulder injury, so veteran Isaiah Oliver or rookie Qwan'tez Stiggers could get the start opposite Gardner if Reed can't play. Kendall Sheffield also could be elevated from the practice squad for the second game in a row. Ulbrich said kick returner Kene Nwangwu will be placed on injured reserve after breaking a hand last weekend at Miami. The injury came a week after he was selected the AFC special teams player of the week in his Jets debut, during which he returned a kickoff 99 yards for a touchdown and forced a fumble in a loss to Seattle. “To put him out there with a broken hand, just thought it’d be counterproductive for him and for us as a team, so it unfortunately cuts the season short and what a bright light he was,” Ulbrich said. “What an amazing future I think he has in this league. With saying that, he’s already been a really good player for quite a while, so (it's) unfortunate, but he’ll be back.” Offensive lineman Xavier Newman (groin) is doubtful, while right guard Alijah Vera-Tucker (ankle) and RT Morgan Moses (wrist) are questionable. AP NFL: https://apnews.com/hub/NFL
