Vera C Rubin Observatory in Chile releases first celestial image
Home to the world’s most powerful digital camera, it will film the night sky for a decade
Detected over 2,000 asteroids in 10 hours — a record-breaking feat
Aims to uncover dark matter, map the Milky Way and spot potential threats to Earth
UK plays key role in data analysis and processing
Powerful new eye on the universe
The Vera C Rubin Observatory, located atop Cerro Pachón in the Chilean Andes, has released its first image, a vibrant snapshot of a star-forming region 9,000 light years from Earth. The observatory, home to the world’s most powerful digital camera, promises to transform how we observe and understand the universe.
Its first observations signal the start of a decade-long survey known as the Legacy Survey of Space and Time, which will repeatedly capture wide-field images of the southern night sky.
Unprecedented detection power
In just 10 hours of observation, the telescope identified 2,104 previously unknown asteroids and seven near-Earth objects, a rate that surpasses what most global surveys find in an entire year. This capacity highlights the observatory’s potential to detect celestial objects that may otherwise go unnoticed, including potentially hazardous asteroids.
One of the telescope's strengths is its consistency. By imaging the same areas every few nights, it can identify subtle changes and transient events in the cosmos, such as supernovae or asteroid movements, and instantly alert scientists worldwide.
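The alerting workflow described here boils down to difference imaging: subtract a reference exposure of the same patch of sky from a new one and flag pixels that have changed significantly. The sketch below is only a rough illustration of that idea, not the Rubin pipeline itself, which uses far more careful calibration; all names and thresholds are assumptions.

```python
import numpy as np

def find_transients(reference: np.ndarray, new_image: np.ndarray,
                    threshold: float = 5.0) -> np.ndarray:
    """Return pixel coordinates where the new exposure differs markedly
    from a reference image of the same field."""
    diff = new_image.astype(float) - reference.astype(float)
    # Flag pixels deviating by more than `threshold` standard deviations
    # of the noise estimated from the difference image.
    noise = np.std(diff)
    return np.argwhere(np.abs(diff) > threshold * noise)

# Toy example: a flat patch of sky with one newly brightened source.
rng = np.random.default_rng(0)
reference = rng.normal(100, 5, size=(256, 256))
new_image = reference + rng.normal(0, 5, size=(256, 256))
new_image[120, 80] += 200  # a transient brightening, e.g. a supernova
print(find_transients(reference, new_image))  # expected: [[120, 80]]
```

In practice the changed pixels become alerts that are streamed to scientists within minutes, which is why the repeated, consistent cadence matters so much.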
A feat of engineering
The telescope’s design includes a unique three-mirror system that captures and focuses light with remarkable clarity. Light enters via the primary mirror (8.4 metres), reflects onto a secondary mirror (3.4 metres), then onto a tertiary mirror (4.8 metres), before reaching the camera. Each surface must remain spotless, as even a speck of dust could distort the data.
The camera itself is an engineering marvel. Measuring 1.65 by 3 metres and weighing 2,800 kilograms, it boasts 3,200 megapixels, 67 times more than the iPhone 16 Pro. A single image would need 400 Ultra HD TV screens to display in full. It captures one image roughly every 40 seconds for up to 12 hours a night.
This design allows the observatory to see objects that are extremely distant, and thus from much earlier periods in the universe’s history. As commissioning scientist Elana Urbach explained, this is key to “understanding the history of the Universe”.
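The camera figures quoted above can be sanity-checked with quick arithmetic: a 4K Ultra HD screen holds roughly 8.3 megapixels, and a 40-second cadence over a 12-hour night yields on the order of a thousand exposures. A small back-of-the-envelope calculation, included purely as a check of the quoted numbers:

```python
# Back-of-the-envelope check of the camera figures quoted above.
camera_megapixels = 3200
uhd_megapixels = 3840 * 2160 / 1e6            # ~8.3 MP per 4K screen
screens_needed = camera_megapixels / uhd_megapixels
print(f"4K screens per image: {screens_needed:.0f}")    # ~386, i.e. roughly 400

exposure_every_s = 40
night_hours = 12
images_per_night = night_hours * 3600 // exposure_every_s
print(f"Images per 12-hour night: {images_per_night}")  # 1080
```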
Protecting the dark sky
The observatory’s remote location was chosen for its high altitude, dry air, and minimal light pollution. Maintaining complete darkness is critical; even the use of full-beam headlights is restricted on the access road. Inside, engineers are tasked with eliminating any stray light sources, such as rogue LEDs, to protect the telescope’s sensitivity to faint starlight.
UK collaboration and scientific goals
The UK is a key partner in the project, with several institutions involved in developing data processing centres that will manage the telescope’s enormous data flow, expected to reach around 10 million alerts per night.
British astronomers will use the telescope to address fundamental questions about the universe. Professor Alis Deason at Durham University says the Rubin data could push the known boundaries of the Milky Way. Currently, scientists can observe stars up to 163,000 light years away, but the new telescope could extend that reach to 1.2 million light years.
She also hopes to examine the Milky Way’s stellar halo, a faint region made up of remnants from dead stars and galaxies, as well as elusive satellite galaxies that orbit our own.
Looking for Planet Nine
Among the more intriguing missions is the search for the mysterious Planet Nine. If the proposed ninth planet exists, it is thought to lie at up to 700 times the distance between Earth and the Sun, too far for most telescopes to detect. Scientists believe the Vera Rubin Observatory may be powerful enough to confirm or refute its existence within its first year.
A new era in astronomy
Professor Catherine Heymans, Astronomer Royal for Scotland, described the release of the first image as the culmination of a 25-year journey. “For decades we wanted to build this phenomenal facility and to do this type of survey,” she said. “It’s a once-in-a-generation moment.”
With its unmatched ability to capture deep-sky imagery and monitor celestial motion over time, the Vera Rubin Observatory is expected to reshape our understanding of the universe, from dark matter to planetary defence.
“It’s going to be the largest data set we’ve ever had to look at our galaxy with,” says Prof Deason. “It will fuel what we do for many, many years.”
New AI-powered Copilot 3D tool converts 2D images into 3D models in seconds.
Available for free to some users via Copilot Labs with Microsoft or Google account sign-in.
Models can be exported in GLB format for use in 3D viewers, tools, and AR applications.
Launch follows Microsoft’s recent introduction of GPT-5-powered Smart Mode in Copilot.
Microsoft has launched Copilot 3D, an artificial intelligence tool that converts standard images into 3D models within seconds. The feature, part of Copilot Labs, is currently free for a subset of users and comes a day after the introduction of GPT-5-powered Smart Mode, reflecting the company’s growing integration of AI into creative and design workflows.
How Copilot 3D works
Copilot 3D allows users to upload PNG or JPG images under 10MB in size. Once an image is uploaded, clicking the “Create” button prompts the AI to produce a 3D model within a few seconds to a minute. The resulting files can be downloaded in GLB format, which is supported by most 3D viewers, design tools, and game engines.
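GLB is the binary form of the open glTF standard, so a downloaded model can be inspected or converted with common open-source tooling. As a hedged example, assuming the third-party `trimesh` library and a hypothetical file name, an exported model could be loaded and re-exported like this:

```python
import trimesh

# Load a GLB file exported from Copilot 3D (file name is hypothetical).
scene = trimesh.load("copilot3d_chair.glb", force="scene")

# Report basic geometry stats for each mesh in the scene.
for name, mesh in scene.geometry.items():
    print(name, mesh.vertices.shape[0], "vertices,", mesh.faces.shape[0], "faces")

# Re-export, e.g. to OBJ, for tools that do not read GLB directly.
scene.export("copilot3d_chair.obj")
```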
Early testing reported by The Verge suggests the tool performs best with objects such as furniture or everyday items but may be less accurate with animals or more complex forms.
Access and storage
The feature is designed for desktop browsers. Users can visit Copilot.com, open the sidebar, navigate to “Labs,” and select “Try now” under Copilot 3D. Generated models are stored for 28 days on a “My Creations” page, allowing time for download and export to augmented reality applications.
Limitations and usage guidelines
Microsoft advises using images with clear separation between subject and background for optimal results. Current support is limited to PNG and JPG formats, but the company may expand compatibility in future updates.
Users must only upload images they own the rights to and avoid submitting photos of people. Accounts may be suspended for violations, and illegal content will be automatically blocked. Microsoft has stated that user-generated 3D models will not be used to train its AI systems.
Target users and applications
Copilot 3D is aimed at rapid prototyping, concept testing, and education — areas where conventional 3D modelling software can be time-consuming or technically demanding. Analysts believe it could appeal to sectors such as game development, product design, and teaching, where demand for 3D assets is high.
By lowering the technical barrier, Microsoft is positioning the tool for professional creators, hobbyists, and learners who wish to experiment with 3D content without mastering complex programmes such as Blender or Autodesk Maya.
Part of Microsoft’s wider AI expansion
The release follows Microsoft’s integration of GPT-5-powered Smart Mode into Copilot, enabling more context-aware AI interactions. The consecutive launches demonstrate the company’s aim to make Copilot a multi-functional platform for productivity, creativity, and design.
OpenAI officially launched GPT-5 during a live-stream, promoting it as a major AI advancement
GPT-5 replaces previous models including GPT-4o, removing user access to the model picker
Users report shorter responses, reduced personality, and restricted prompt usage
Online forums are filled with frustration, with many calling it a downgrade
OpenAI has launched its highly anticipated GPT-5 model, announcing the rollout during a live-streamed event. CEO Sam Altman described it as a significant leap in AI development, comparing its capabilities to that of a PhD-level expert across multiple disciplines.
According to OpenAI, GPT-5 offers improvements in reasoning, writing, coding, factual accuracy, and handling of health-related queries, while exhibiting fewer hallucinations — a term used to describe when AI makes false or fabricated claims.
However, the rollout has also triggered a wave of backlash, as OpenAI has removed access to previous models such as GPT-4o, o4-mini, and o3. The model picker option within ChatGPT has been eliminated, preventing users — including paying subscribers — from choosing which version they interact with.
Instead, OpenAI now uses a routing system to automatically determine whether the base or reasoning model of GPT-5 should respond to a user’s query. While the intention is to streamline the experience, many users feel the change has led to a degraded product.
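OpenAI has not published how this router works, so any concrete detail is speculation. Purely as a conceptual sketch of what routing a query between a fast base model and a slower reasoning model could look like in principle, with every heuristic and name invented for illustration:

```python
# Hypothetical illustration only: OpenAI's actual GPT-5 router is not public.
REASONING_HINTS = ("prove", "step by step", "debug", "derive", "why")

def route(query: str) -> str:
    """Pick which model tier should answer a query (toy heuristic)."""
    long_or_complex = len(query.split()) > 80
    asks_for_reasoning = any(hint in query.lower() for hint in REASONING_HINTS)
    return "reasoning-model" if (long_or_complex or asks_for_reasoning) else "base-model"

print(route("What is the capital of France?"))                    # base-model
print(route("Prove that the sum of two even numbers is even."))   # reasoning-model
```

The complaints below suggest that, whatever the real routing logic is, many users feel it too often sends their queries to the faster, terser tier.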
Online reaction: Shorter replies, lost personality, and disappointment
While GPT-5 aims to offer a smarter and more efficient AI experience, early user feedback paints a different picture. Across Reddit and other social platforms, complaints have centred on shorter responses, robotic tone, and a lack of emotional nuance compared to previous models.
“They have completely ruined ChatGPT. It’s slower, even without the thinking mode. It gives such short replies and gets some of the most basic things wrong,” wrote one Reddit user. “It also doesn’t follow instructions and just does whatever it wants.”
Another user suggested the changes were financially motivated:
“They shortened the answers to save costs. Removed emotional intelligence so people stop chatting all day. But this will probably cost them millions in lost subscriptions.”
Other users echoed similar sentiments:
“It doesn’t have the same vibe as 4o. While more organised in some ways, the replies are clipped and lack warmth.”
“Answers are shorter and, so far, not any better than previous models. Combine that with more restrictive usage, and it feels like a downgrade branded as the new hotness.”
Perhaps the most emotive comment came from a user mourning the removal of previous models:
“I really feel like I just watched a close friend die.”
A changing AI landscape
The rollout of GPT-5 marks another step in OpenAI’s strategy to unify its offerings under one flagship model. While the company insists the changes are meant to improve user experience through smarter automation, the loss of model choice and perceived dip in conversational quality have clearly hit a nerve among long-term users.
It remains to be seen whether future updates will address the backlash, or whether GPT-5 will evolve into the all-in-one solution OpenAI envisions — without sacrificing the personality that once defined its predecessors.
Fortnite Chapter 6, Season 4 begins on Thursday, 7 August with the new “Shock and Awesome” theme.
The update features Power Rangers, Halo Spartans, and a new insect invasion.
Server downtime begins between 1:30 AM and 2:00 AM EDT.
Expected downtime is 2–6 hours before the update becomes available.
Exact release times vary by region (full list below).
Epic Games has confirmed that Fortnite Chapter 6, Season 4 – titled Shock and Awesome – will launch globally on Thursday, 7 August 2025, bringing a mix of new collaborations and original content.
The new season introduces a unique enemy threat in the form of an insect invasion, a first for the game. Past seasons have included zombies and mercenaries, but this is the first time players will encounter bug-based enemies. The update also features crossover content from Halo and Power Rangers, as part of the ongoing expansion of Fortnite’s multiverse.
Key collaborations: Halo, Power Rangers and Megazord
Among the featured additions this season are:
Halo Spartans (unlockable via the Battle Pass, with some content tied to Level 100).
Six Power Rangers, including the Green Ranger, with the core five available later via the in-game store.
Megazord, expected to appear later in the season, not at launch.
Speculation continues around further crossover possibilities, with some players anticipating tie-ins with Solo Leveling or Helldivers, though nothing has been officially confirmed.
Exact regional release times
Fortnite servers will go offline early on 7 August between 1:30 AM and 2:00 AM EDT, marking the end of Chapter 6, Season 3. Server downtime typically lasts between 2 and 6 hours. Based on that window, here are the estimated release times for Season 4:
North America (PDT): 3:00 AM – 8:00 AM
North America (EDT): 6:00 AM – 11:00 AM
Brazil (BRT): 7:00 AM – 12:00 PM
United Kingdom (BST): 11:00 AM – 4:00 PM
Western Europe (CEST): 12:00 PM – 5:00 PM
Japan (JST): 7:00 PM – 12:00 AM
Australia (AEST): 8:00 PM – 1:00 AM
Players can expect access to the new season once the maintenance period concludes, though timing may vary depending on update size and server load.
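The regional times above are a straightforward timezone conversion of the estimated EDT availability window. As a small sketch using Python's standard zoneinfo module (the window itself is Epic's estimate, and the zone choices below are assumptions, e.g. Brisbane for AEST because it does not observe daylight saving):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Estimated availability window quoted above, expressed in EDT.
edt = ZoneInfo("America/New_York")
window = (datetime(2025, 8, 7, 6, 0, tzinfo=edt),
          datetime(2025, 8, 7, 11, 0, tzinfo=edt))

regions = {
    "North America (PDT)": "America/Los_Angeles",
    "Brazil (BRT)": "America/Sao_Paulo",
    "United Kingdom (BST)": "Europe/London",
    "Western Europe (CEST)": "Europe/Paris",
    "Japan (JST)": "Asia/Tokyo",
    "Australia (AEST)": "Australia/Brisbane",
}

for label, tz in regions.items():
    start, end = (t.astimezone(ZoneInfo(tz)) for t in window)
    print(f"{label}: {start:%I:%M %p} – {end:%I:%M %p}")
```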
Instagram now allows users to repost their friends’ public photos and Reels.
Reposted content appears in a dedicated tab, not on the main profile grid.
A new ‘friends feed’ will highlight posts liked or commented on by users’ friends.
Privacy controls allow users to manage visibility of likes and comments.
The update is currently live in the US and rolling out globally.
Instagram rolls out reposting feature for public posts and Reels
Instagram has introduced a long-requested reposting feature that enables users to share their friends’ public photos and Reels directly on their own profiles. While the functionality mirrors what has long been available on platforms like X (formerly Twitter) and TikTok, Instagram has placed some limitations on how reposts are displayed.
Instead of appearing on a user’s main profile grid, reposted content is shown in a separate, dedicated tab. This new capability expands on the limited reposting options previously available for Stories and Reels, and marks an effort by the platform to make sharing within the app more intuitive.
Feature still comes with limitations
Despite the addition of Instagram reposting, the platform has opted to keep shared content distinct from original posts. Users will not be able to include reposted images in their main profile feed — a decision likely aimed at preserving the visual cohesion many users curate on their profiles.
Additionally, only public content can be reposted, and the feature may not be fully available in all regions just yet.
Friends-focused Reels feed introduced
In a further update, Instagram has launched a new Reels feed focused entirely on content from friends. This allows users to browse Reels that their friends have posted, liked, or commented on — replicating a feature long present on TikTok, from which Instagram originally adapted the Reels format.
The ‘friends feed’ will highlight this activity automatically, but users retain the ability to limit what others can see. Instagram confirmed that individuals can disable the display of their likes and comments in the feed or mute interactions from specific users.
Context and global rollout
Instagram originally launched Reels in 2020 in response to the growing popularity of TikTok, and the format has since become central to its strategy. The latest changes aim to present users with more relevant content — particularly from friends — amid ongoing criticism of Instagram’s algorithmically driven main feed.
The company has also taken steps to downplay political content and refocus attention on personal interactions. The new features, including Instagram reposting and the friends Reels feed, are currently available in the United States and are being rolled out to other regions in the coming weeks.
Google DeepMind introduces Genie 3, a new AI world model for training robots and autonomous systems
Model generates interactive, physics-based simulations from simple text prompts
Genie 3 could support the development of artificial general intelligence (AGI)
The tool is not yet available to the public and comes with technical limitations
Simulations could be used to train warehouse robots, autonomous vehicles, or offer virtual experiences
Google has revealed a new AI system called Genie 3, which it claims is a major advance towards developing artificial general intelligence (AGI). The model creates lifelike virtual environments from simple text prompts and could be used to train AI agents for real-world tasks, particularly in robotics and autonomous navigation.
Developed by Google DeepMind, Genie 3 enables AI systems to interact with realistic, physics-based simulations of the real world—such as warehouses or mountainous terrains. The company believes that these world models are a critical part of building AGI, where machines can perform a wide range of tasks at a human level.
How Genie 3 works
Genie 3 allows users to generate interactive virtual scenes by typing natural language prompts. These simulations can then be manipulated in real time—for instance, a user could ask for a herd of deer to appear on a ski slope or alter the layout of a warehouse.
The visual quality of the scenarios is comparable to Google’s Veo 3 video generation model, but the key difference is that Genie 3’s simulations can last for minutes, offering real-time interaction beyond Veo 3’s short video clips.
So far, Google has demonstrated examples of skiing and warehouse environments to journalists, but has not made Genie 3 available to the public. No release date has been given, and the company acknowledged the model has a number of limitations.
Why it matters for AGI
Google says Genie 3 and other world models will be vital in developing AI agents—systems capable of acting autonomously in physical or virtual environments. While current large language models are good at tasks like planning or writing, they are not yet equipped to take action.
“World models like Genie 3 give disembodied AI a way to explore and interact with environments,” said Andrew Rogoyski of the Institute for People-Centred AI at the University of Surrey. “That capability could significantly enhance how intelligent and adaptable these systems become.”
Applications in robotics and virtual training
The real-time, physics-based nature of Genie 3’s simulations makes them ideal for training robots or autonomous vehicles. For example, a robot could be trained in a virtual warehouse—interacting with human-like figures, avoiding collisions, and handling objects—all before being deployed in a physical setting.
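Google has not released Genie 3 or any API for it, so code here is necessarily hypothetical. As a rough, self-contained sketch of the workflow described above (evaluate a robot's policy in a simulated warehouse before any real-world deployment), with every class and function name invented for illustration:

```python
import random

class ToyWarehouseSim:
    """Tiny stand-in for a simulated warehouse. Genie 3 has no public API;
    this environment is entirely hypothetical and only illustrates testing a
    robot policy in simulation before deploying it in the real world."""

    def __init__(self, length: int = 10):
        self.length = length
        self.robot = 0
        self.obstacle = random.randint(2, length - 2)

    def observe(self) -> bool:
        """True if the cell directly ahead of the robot is blocked."""
        return self.robot + 1 == self.obstacle

    def step(self, action: str):
        """action is 'forward' or 'sidestep'. Returns (done, collided)."""
        if action == "sidestep":
            self.obstacle = -1            # the robot has moved around the box
            return False, False
        self.robot += 1
        if self.robot == self.obstacle:
            return True, True             # collision ends the episode
        return self.robot >= self.length - 1, False

def cautious_policy(blocked_ahead: bool) -> str:
    return "sidestep" if blocked_ahead else "forward"

# Evaluate the policy over many simulated episodes before real deployment.
collisions = 0
for _ in range(1000):
    env = ToyWarehouseSim()
    done = False
    while not done:
        done, collided = env.step(cautious_policy(env.observe()))
        collisions += collided
print(f"Collisions in 1000 simulated runs: {collisions}")
```

A world model like Genie 3 would, in effect, supply a far richer version of the simulated environment in this loop, generated from a text prompt rather than hand-coded.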
Professor Subramanian Ramamoorthy, Chair of Robot Learning and Autonomy at the University of Edinburgh, said: “To achieve flexible decision-making, robots need to anticipate the consequences of different actions. World models are extremely important in enabling that.”
Broader industry competition
Google’s announcement comes as competition intensifies in the AI industry. Just days earlier, OpenAI CEO Sam Altman shared what appeared to be a teaser of GPT-5, the next major language model from the makers of ChatGPT.
While OpenAI and Google compete in developing advanced LLMs (large language models), world models like Genie 3 add a new dimension by allowing AI systems to perceive, act and learn from interactions in simulated spaces—not just process text.
What's next?
Alongside Genie 3, Google has also built a virtual agent named Sima, which can carry out tasks within video games. Though promising, neither Sima nor Genie 3 is available to the public at this stage.
A research note accompanying Sima last year stated that language models are good at planning, but struggle to take action—a gap that world models could help bridge. Google says it expects such models to play “a critical role” as AI agents become more embedded in the real world.