Self Driving Car: Tesla

Where are the self-driving cars? Why does Elon Musk keep failing to deliver on his promise of fully self-driving cars?

Self-driving cars are the hot new thing, and every major tech and automobile company is working on its own iteration of this not-so-new concept. These companies know there is huge interest in self-driving cars. Many people find the job of driving tedious and tiresome. Letting an AI computer drive removes the need for a human to be occupied with the unproductive job of driving, allowing the rider to focus on other important tasks or simply relax while the car takes them to their destination.

In 2015, Elon Musk, the billionaire Tesla CEO, claimed that by 2018 Tesla’s cars would achieve “full autonomy,” or “Full Self-Driving” in Tesla jargon. Musk has made multiple similar claims since. He also announced a Robo-Taxi program in which Tesla cars would serve as autonomous taxis operating without any drivers. Tesla and Musk have repeatedly failed to deliver on those promises, even though Tesla appears to be the closest to achieving a completely autonomous system.


Tech companies like Apple and Google have also been working on building autonomous cars. Apple has been working on Project Titan since 2014; the very secretive project is essentially Apple trying to build a complete vehicle with full self-driving technology at its core. Google has a subsidiary called Waymo developing the core technologies that make autonomous cars possible.

Yet the biggest companies, with the deepest pockets and the brightest minds, are still unable to build a flawless, fully self-driving automobile. The problem with these systems is not a lack of data for a machine-learning model to learn from. Automobile companies, Tesla in particular, have been collecting driving data from hundreds of thousands of cars all over the world. They have the raw data required to train the AI.

Nor is the problem a lack of hardware resources in the individual car: the processors that drive these cars are more than powerful enough to enable full self-driving. Autonomous cars do not need huge amounts of memory, because the decisions the AI system takes do not depend on what the vehicle had in memory five minutes ago. The computer driving the vehicle needs data about its surroundings at the given moment. Pedestrians, wildlife, and other vehicles come and go within seconds, and once they have passed, that data is no longer needed in memory.
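
The idea that only the most recent surroundings matter can be sketched as a fixed-size rolling buffer. This is a hypothetical illustration of the memory pattern, not Tesla's actual pipeline; the `SensorBuffer` name and frame format are made up for the example:

```python
from collections import deque

class SensorBuffer:
    """Keep only the most recent sensor frames; older frames are discarded
    automatically, so memory use stays constant however long the drive is."""

    def __init__(self, max_frames=30):
        self.frames = deque(maxlen=max_frames)

    def add(self, frame):
        self.frames.append(frame)  # oldest frame is evicted when full

    def latest(self):
        return list(self.frames)

buf = SensorBuffer(max_frames=3)
for t in range(10):  # simulate 10 timesteps of sensor data
    buf.add({"t": t, "objects": []})

print(len(buf.frames))      # only the 3 newest frames are kept
print(buf.frames[0]["t"])   # oldest retained frame is from t=7
```

A fixed `maxlen` deque is the standard way to express "forget everything older than N frames" without ever growing memory.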

With no major hardware issue in the way, we are closer to fully self-driving cars than ever, but human intervention is still needed every now and then. We have successfully achieved what is called Level 3 autonomy, but the jump to Level 5 is years away.

Here are the levels of autonomy for self-driving cars, from Level 0 to Level 5.

  1. Level 0, No automation: The car has no AI system assisting the driver in any way. At most, it may have constant-speed cruise control.
  2. Level 1, Driver assistance: The car has adaptive cruise control or lane-keeping technology that keeps it cruising in a specified lane, but the driver still does most of the driving.
  3. Level 2, Partial automation: The car can keep a safe distance and follow the route, but the driver must be ready to take control whenever necessary.
  4. Level 3, Conditional automation: The car can drive by itself in some situations. Although the driver is rarely required to act, he or she must be ready to respond at all times. Teslas currently possess Level 3 autonomy: they can drive all by themselves, but a driver is still needed behind the wheel.
  5. Level 4, High automation: When travelling on a controlled route, a driver is not required and may even choose to snooze in the back. A driver is still required on other roads.
  6. Level 5, Full automation: There is no need for a driver at all. The car may not even have a steering wheel or pedals. Passengers can sit back and rest because the automobile has everything under control.

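The levels above can be condensed into a toy decision rule for when a human must stay ready to take over. This is an illustrative sketch of the classification described in the list, not a real certification criterion:

```python
def driver_required(level: int, controlled_route: bool = False) -> bool:
    """Return True if a human must be behind the wheel, ready to take over.

    Levels follow the list above: 0-3 always need an alert human,
    level 4 is hands-off only on controlled routes, level 5 never
    needs a driver.
    """
    if level <= 3:
        return True
    if level == 4:
        return not controlled_route
    return False  # level 5

print(driver_required(3))                         # True
print(driver_required(4, controlled_route=True))  # False
print(driver_required(5))                         # False
```

The interesting boundary is Level 4, where the answer flips depending on whether the car is on a controlled route.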
Autonomous driving has faced plenty of criticism due to the various minor and major accidents involving vehicles with current-generation technology. In most of those cases it was human error, not the AI, that caused the accident, but there have been cases where the AI failed to detect an obstacle and caused a mishap.

The problem with today’s level of autonomy is that a human is required behind the wheel, ready to take over whenever needed. For the most part, humans start relying too much on the AI and lose attention; we are pretty bad at staying alert after spending hours just looking at the road without doing anything.

To go beyond the level of autonomy we have already achieved, we might need to compromise and create controlled routes (to achieve Level 4 autonomy), or build AI systems far more flexible and advanced than what we already have.

History of the smartphone

History of Smartphones: How did mobile phones evolve into the amazing smartphones of the modern age?

It took smartphones more than two decades to become the highly sophisticated and essential part of our daily lives that they are today.

Mobile phones started out large and bulky, with very little functionality; they were a luxury only a few could afford. Today more than 70% of the global population uses affordable smartphones with processing power thousands of times greater than what the first generation could provide. Today’s smartphones are millions of times faster than the computer that took humans to the moon.

Before smartphones

Mobile phones were a luxury that could only make calls and send text messages. There was another category of devices, called personal organizers or personal digital assistants (PDAs), used for business purposes: they had a digital calendar and let you take notes and write emails.

Mobile phones at the time could merely be used for calling and did not serve any other function, due to a lack of processing power.

1992: The First Smartphone

IBM, the biggest computer company of the 90s, wanted to expand its business. In its quest for new tech that could fit into the company’s portfolio, it decided to take a computer and shrink it down until it fit in your pocket. This idea started the project that would later be called the first true modern-day smartphone.


IBM revealed the device that can be called the first smartphone in 1992, a revolutionary device that was far ahead of its time. It had a touch screen and various apps, such as an address book, a calendar, an appointment scheduler, a calculator, a world time clock, an electronic notepad, and handwritten annotations. IBM called this revolutionary device the Simon Personal Communicator.

The term smartphone didn’t exist yet, but the IBM Simon was the first mobile device advanced enough to be called a modern-age smartphone.

1999: The age of Blackberry


BlackBerry launched its first hardware device, the BlackBerry 850, in 1999. The BlackBerry 850 actually fell into the category of two-way pagers. BlackBerry became the largest smartphone manufacturer of its time. The company’s mobile phones provided features no other manufacturer could: email on the go with great wireless network connectivity. BlackBerry also had an instant messenger called BBM long before WhatsApp and the others were even an idea. BlackBerrys also had super-secure connectivity and messaging protocols, making them a great choice for anyone worried about confidential data being stolen from their devices. These features put a BlackBerry in the hands of almost every business customer across the globe.

2001: Mobile and the Internet

It took almost a decade to get proper 3G connections to smartphones. In 2001, a mobile protocol was built to allow mobile devices to connect to the internet wirelessly. This enabled various new applications and features, but these again were not for everyone: the price of the devices had come down, but the cost of data wasn’t worth it for most people. Later, as data prices fell, more and more people started using mobile internet.

2007: Apple & Steve Jobs Changed the World

In 2007, Steve Jobs unveiled one of the most influential smartphones in history, a device that defined what the future should look like and reinvented the mobile phone: the Apple iPhone. The iPhone was the sleekest touchscreen device ever made, featuring a large glass screen and a single button on the front.

When Apple revealed the iPhone, other mobile phone manufacturers criticized it, calling the iPhone a gimmick and a waste of money, and the press referred to it as a gamble. Five months later, when the device hit stores, people lined up outside Apple Stores to get their hands on the future.


“Every once in a while, a revolutionary product comes along that changes everything, Apple’s been very fortunate. It’s been able to introduce a few of these into the world. Well, today, we’re introducing three revolutionary products of this class. The first one is a wide-screen iPod with touch controls. The second is a revolutionary mobile phone. And the third is a breakthrough internet communications device. An iPod, a phone, and an internet communicator. An iPod, a phone … are you getting it? These are not three separate devices: This is one device, and we are calling it iPhone.” said Steve Jobs during the legendary keynote that gave the world the most influential piece of technology it ever had.

While sales started to fall after most enthusiasts had bought the first iPhone, Apple realized that with so much screen real estate, what apps could do was limited only by the imagination of the human brain. Apple launched the App Store in 2008, one year after the iPhone launch, giving users a way to install apps and go beyond what Apple provided. This addition was one of the major reasons the iPhone became a massive hit in the years to come.

Google follows with Android

While every major mobile phone company at the time was mocking Apple for the iPhone, its new touchscreen keyboard, and its completely new user interface, one company knew Apple had built the future. Google was about to launch its first Android phone just a few months after the announcement of the iPhone, but after seeing Apple unveil the iPhone on stage, Google knew it had to completely rebuild Android.

The device that almost made it to market was canceled, and Google’s engineers went back to the drawing board, rethinking every aspect of the operating system.


The HTC Dream, released in 2008, became the first Android smartphone. The first Android release drew criticism for lacking features compared to iOS and BlackBerry, but it was set to become one of the most popular operating systems in the world thanks to its open nature.

2021: Entire world just a click away

Today, 14 years and 27 iPhones later, we can do almost everything with just a few taps on a touchscreen. We do everything from reading the news, chatting with friends and family, playing highly sophisticated games, and listening to any song in the world, to sending emails and handling all sorts of business tasks.

The smartphone also gave rise to a new app industry that is now bigger than Hollywood. Smartphones, along with cheaper mobile internet, are changing how everything works, transforming entire industries and building new ones such as ed-tech and video and music streaming.


Facebook is now Meta: New vision, new goals and a new name.

On Thursday, October 28th, Facebook Inc. CEO Mark Zuckerberg announced Meta, which will be the parent company of Facebook, Instagram, WhatsApp, Oculus, and the other brands previously under Facebook Inc.

Meta Platforms Inc., or Meta, is the company’s way of saying it is no longer just a social media company but one innovating in how people communicate, a company focused on building a metaverse.



Note that the blue Facebook app is not being renamed; it is Facebook Inc., the company, that is now Meta Platforms Inc.

The company’s vision of the metaverse is about building a hybrid between the internet and the real world: rather than living on 2D screens, the internet will expand into 3D in the real world using AR and VR technologies. The idea is limited only by how creative people can get within what is technologically feasible.

At Connect 2021, the social technology company announced it will invest a total of $150 million in training the next generation of creators to work with AR, VR, and the metaverse.

This new vision and these new efforts will be implemented without changing the corporate structure of Facebook Inc. However, the company will report its financials in two separate operating segments: Family of Apps and Reality Labs.

The company also announced that, starting December 1, the stock ticker will change from FB to MVRS to complement the new name.

The company had already begun working on various lines of hardware to build the technologies the metaverse requires. Through Oculus, it already has multiple VR headsets on the market, and it has also launched the Ray-Ban Stories glasses and a line of portable video-calling devices.


“Our hope is that within the next decade, the metaverse will reach a billion people, host hundreds of billions of dollars of digital commerce, and support jobs for millions of creators and developers,” Zuckerberg wrote in a letter. 

With the new name, Mark Zuckerberg and Facebook have told the world that augmented reality and virtual reality will be a key part of the company’s business moving forward.


MacBook Pro

Apple releases M1-based MacBook Pros: The M1 goes Pro.

On the 18th of October, at its “Unleashed” event, Apple released its most awaited product of the year: the Apple silicon-based MacBook Pro. The new MacBook Pro series comes with bigger, souped-up versions of the M1 SoC, which the company calls the M1 Pro and the top-of-the-line M1 Max. The notebook is available in 14-inch and 16-inch display variants and is the biggest update to the MacBook in the past couple of years, a period in which Apple almost messed up the Pro workflow.

The new M1 Pro and M1 Max are beefed-up versions of the already excellent M1, offering up to a 10-core CPU and a 32-core GPU with up to 64 GB of unified memory.

New Silicon

The Cupertino giant revealed its plan to transition from Intel to its own custom in-house silicon in 2020. The M1, Apple’s first custom SoC in a Mac, was ARM-based and hence provided huge performance improvements while consuming far less power.

Now the iPhone maker has announced the M1 Pro and the M1 Max, which are meant to further increase performance using more high-performance cores.


Both the M1 Pro and the M1 Max have a total of 10 CPU cores, of which 2 are high-efficiency and 8 are high-performance. Both SoCs have a 16-core Neural Engine. While the M1 Pro tops out at a 16-core GPU with 32 GB of unified memory, the M1 Max has a 32-core GPU and up to 64 GB of unified memory.

Apple’s marketing charts and numbers show the M1 Pro and M1 Max delivering up to 1.7 times more performance while using up to 70% less power. The M1 Pro’s GPU is 2 times faster than the M1’s, while the M1 Max is 4 times faster than the M1 in GPU performance.


The MacBook Pro 2021 has a completely new display panel, now with a notch for the camera and thinner bezels. The display itself is a mini-LED panel delivering 1,000 nits of full-screen brightness, 1,600 nits of peak brightness, and a 1,000,000:1 contrast ratio. It also supports Apple’s ProMotion adaptive refresh rate of up to 120Hz. The notch at the top of the screen houses a new 1080p FaceTime camera but, despite its massive size, lacks the other sensors required for Face ID.


The display supports the complete P3 wide colour gamut, HDR, and XDR output. 


In 2016, Apple gave the MacBook Pro a major redesign, making the device thin, light, and portable, and in the process got rid of a lot of very important ports. This time Apple went back and restored the HDMI port and the SD card slot alongside three USB-C Thunderbolt 4 ports, a relief for professionals who no longer have to carry around weird dongles for basic functionality.


Apple has also brought back the MagSafe magnetic connector for charging. However, users can still charge the device over USB-C, as on current devices.


A Lot More

Connectivity options on the MacBook Pro include Bluetooth 5.0 and Wi-Fi 6. Users can connect up to three Pro Display XDRs and a 4K TV simultaneously to M1 Max-based devices, and up to two Pro Display XDRs and a 4K TV to the M1 Pro.


The 14” model now provides 17 hours of battery life, while the 16” model boasts a massive 21 hours. The company also claims the device uses a new thermal system that can move up to 50% more air, even at lower fan speeds.

The new speaker system on these MacBook Pro models includes six speakers, of which two are tweeters and the other four are force-cancelling woofers.

Apple claims that using its new M1 Max chip, pros can edit up to 30 streams of 4K ProRes video or up to seven streams of 8K ProRes video in Final Cut Pro — more streams than on a 28-core Mac Pro with Afterburner.


Both the M1 Pro and M1 Max are also paired with a 16-core Neural Engine that enhances machine-learning capabilities on the new MacBook Pro models. According to numbers Apple shared publicly, the built-in Neural Engine delivers up to 8.7 times faster object tracking in Final Cut Pro with the M1 Pro, and up to 11.5 times faster with the M1 Max. There is also up to 2.6 times faster performance when selecting subjects in images in Adobe Photoshop.

The MacBook Pro 2021 comes preinstalled with macOS Monterey, and the OS will also be available to older devices running macOS Big Sur. The MacBook Pros with the M1 Pro and M1 Max are true Pro machines that provide the power creative professionals need, Apple claims.



Pricing for the 14-inch Apple MacBook Pro (2021) starts at Rs. 1,94,900, with an education price of Rs. 1,75,410. The 16-inch Apple MacBook Pro (2021) starts at Rs. 2,39,900 for regular customers and Rs. 2,15,910 for education.

In the US, the 14-inch Apple MacBook Pro (2021) starts at $1,999 (roughly Rs. 1,50,400), while the 16-inch version is priced at $2,499 (roughly Rs. 1,88,100).


Child Abuse Awareness

Apple will now scan your device for child sexual abuse material (CSAM).

On 6th August, Apple revealed that it is taking a stance against child abuse across its operating systems, including iOS 15, macOS Monterey, iPadOS 15, and watchOS 8.

The anti-CSAM features will first appear in three areas: Messages, iCloud, and Siri and Search.


The Messages app will now include tools that warn children and their parents when sexually explicit images are sent or received. If such a message is received, the child will be warned and the image blurred. The child will be assured it is okay not to view the photo, and their parents will be informed if they choose to see it. This is done using on-device intelligence, ensuring the user’s privacy is not violated. Similar protections apply if a child attempts to send sexually explicit photos: the child is warned before the photo is sent, and the parents can receive a message if the child chooses to send it.

Siri and Search will now be able to guide and help children and parents stay safe online, and will include more resources related to the matter. Users can also ask Siri for help reporting CSAM and will be guided through filing a report.

Siri will also intervene when users try to search for CSAM-related material: they will be informed that interest in the topic can be harmful and problematic, and will be offered resources to help with the issue.

iCloud and CSAM detection

The most prominent step in Apple’s stance is its CSAM detection system, which tries to match your images against a list of known CSAM image hashes provided by the US National Center for Missing and Exploited Children (NCMEC) and other child-safety organizations before an image is stored in iCloud.


“Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image,” Apple said.

“Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.”

Apple cannot normally view the images manually until the number of hash matches reaches the set threshold. When that happens, Apple will manually review the vouchers and metadata, and if the content is confirmed as CSAM, Apple will disable the account and send a report to NCMEC.
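
The control flow of threshold-based matching can be sketched in a few lines. This is a simplified illustration only: Apple's real system uses perceptual (NeuralHash) hashes, private set intersection, and threshold secret sharing, whereas plain SHA-256 and an in-memory set here just show the flag-after-N-matches logic:

```python
import hashlib

# Hypothetical database of known-CSAM hashes (stand-in byte strings).
KNOWN_HASHES = {
    hashlib.sha256(b"known-image-1").hexdigest(),
    hashlib.sha256(b"known-image-2").hexdigest(),
}

THRESHOLD = 2  # the account is only flagged after this many matches

def account_flagged(images: list[bytes]) -> bool:
    """Count how many images match the known list; flag only past the threshold."""
    matches = sum(
        1 for img in images
        if hashlib.sha256(img).hexdigest() in KNOWN_HASHES
    )
    return matches >= THRESHOLD

print(account_flagged([b"vacation", b"known-image-1"]))              # False: 1 match, below threshold
print(account_flagged([b"known-image-1", b"known-image-2", b"cat"]))  # True: threshold reached
```

The threshold is the key design choice: a single accidental match reveals nothing and triggers nothing, which is how the system keeps its false-positive rate low.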


Apple’s approach has received criticism, with detractors stating that the Cupertino giant is preparing to add a backdoor to its services. The fact that Apple can manually review the flagged images makes the feature controversial, given that the company once declined an FBI request to create a backdoor into a terrorist’s iPhone.

Apple has some of the best privacy practices in the industry, and we believe it will not use the technology to collect user data. For those who do not trust it, you can stop Apple from scanning your photos by not using iCloud Photos.

When will these features be implemented?

These new features are expected to go live with the release of iOS 15, iPadOS 15, macOS Monterey, and watchOS 8 later this year, alongside Apple’s upcoming iPhone 13 and MacBook Pros.


Apple is looking to lease a Hollywood studio campus to expand its Apple TV+ streaming service.

Apple Inc is reportedly looking for a Hollywood-based movie production studio campus in order to establish itself as a major player in the entertainment business.

The iPhone maker is looking at multiple possible locations across Los Angeles. The campus could exceed half a million square feet and would complement the company’s current arrangement of leasing soundstages to film in.

The new Hollywood studio would play a major role in creating content for the Apple TV+ streaming service. The platform, launched in 2019, is one of the smaller players in the streaming wars, even though it has some very well-known shows such as Ted Lasso and The Morning Show. With this expansion, Apple is making big-money moves to gain a much larger user base and set itself up as a direct competitor to Netflix and Disney+.

Apple is also making a large number of new deals, acquiring multiple movies and TV shows, including Martin Scorsese’s ‘Killers of the Flower Moon’ starring Leonardo DiCaprio, and multiple other big-name projects. The platform has some of the finest and highest-rated content, earning it a curious spot in the entertainment industry, as it still lacks scale. This raises the question of whether Apple will acquire a movie and film studio to get a whole catalog of content, similar to Amazon’s acquisition of MGM.

Apple has now hired Mike Mossallam, a lead production real-estate executive who was also director of production planning and studio leasing at Netflix. This increases the chances of Apple following Netflix in acquiring and building multiple production campuses, alongside leasing soundstages across Hollywood and the world.

Laptop Webcams are bad

Why are laptop webcams so bad? | “They Suck”

Are webcams bad? Actually, they suck!

Our assumption was that laptop makers figured the webcam is an easy place to cut costs, but as it turns out, that is only a small piece of the puzzle.

How did we get here? Although reviewers have been among the most vocal voices trying to get decent webcams into laptops, many now expect that you will simply use your phone, where camera quality has become much better; the typical webcam falls considerably short of even the Surface Laptop’s.

But first, to understand what makes a camera bad, we have to talk about what makes one good, starting with the sensor. The Sony FS6 features an enormous 35-millimetre-wide full-frame sensor, and when it comes to camera sensors, size matters.

All other things being equal, the bigger the sensor, the more light it can collect on each pixel, improving colour fidelity and greatly improving performance in low-light conditions. Problem number one, then, is that due to space constraints, webcams use very small sensors, typically in the range of about two to five millimetres wide. As pressure from consumers and reviewers mounted for laptop bezels to shrink, the webcams shrank too.

We were stoked when Dell announced refreshed XPS models with top-mounted webcams that didn’t look up the user’s nose, and we still think that was a worthwhile trade-off, but the results are not incredible. Does shrinking the bezel really make that much of a difference? Yes, actually. In the Dragonfly Max, HP increased the bezel size compared to the last model to accommodate a 3.63-millimetre sensor.

Now, that’s still only about 2% of the size of the FS6 sensor we just mentioned, but it’s also a whopping five times the size of the 2.2-millimetre sensor in Dell’s skinny-bezel machines. This bigger sensor also allows more pixels to be packed in for 1440p recording, which delivers a noticeable improvement in sharpness compared to your typical 720p webcam.
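
Since light gathering scales roughly with sensor area, small differences in width compound quickly. Here is a back-of-the-envelope sketch using the widths mentioned above (35 mm, 3.63 mm, and 2.2 mm), assuming similar aspect ratios so that area is proportional to width squared:

```python
def area_ratio(width_a_mm: float, width_b_mm: float) -> float:
    """Relative light-gathering area of two sensors, assuming area ~ width^2."""
    return (width_a_mm / width_b_mm) ** 2

# HP Dragonfly Max's 3.63 mm sensor vs a 2.2 mm thin-bezel sensor:
print(round(area_ratio(3.63, 2.2), 2))   # 2.72, i.e. ~2.7x the area

# Full-frame 35 mm sensor vs the Dragonfly Max's sensor:
print(round(area_ratio(35.0, 3.63)))     # 93, i.e. ~93x the area
```

Note that the article’s “2%” and “five times” figures compare sizes loosely; the area-squared rule above is one common way to reason about the low-light gap, not the exact comparison the original video used.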

Another problem for laptop webcams is Windows Hello facial recognition. It is the best way to sign in to Windows, hands down, but because lighting conditions might differ dramatically from one sign-in to the next, it relies on built-in emitters that illuminate your face with infrared light. Digital camera sensors are inherently sensitive to infrared, but most cameras deliberately block it out because it causes issues with both autofocus and colour saturation. That’s why earlier implementations of Windows Hello used a separate camera dedicated to capturing IR. But in that same mad rush to shrink bezels, the IR and video cameras were combined in some models, and in spite of the software tuning that goes into these combined sensors, you can still get purple splotchiness and weird noise. On the Dragonfly Max, HP decided combined sensors were a pain in the butt they didn’t want to deal with, so they included separate IR and video cameras to get the cleanest feed possible.

Of course, you can have the biggest and baddest sensor in the world, but if the glass in front of it is bad, your picture quality is going to be bad. One of the roles the lens plays, then, is to ensure that the tiny sensor gets as much light hitting it as possible. The aperture of a camera, or more accurately of a lens, describes how big an opening you have: the bigger the opening, the more light you get. Since we already have low-light problems with such small sensors, we should make the aperture on the webcam lens as wide open as possible, right? Well, yes, but the size of the aperture changes not only how much light is let in but also the depth of field. So if we really opened up the lens on a laptop, it would need exceptional autofocus to make sure you don’t end up blurry every time you shift around in your seat. There are many ways to do autofocus; probably the best is demonstrated in Sony’s Alpha lineup.

It uses parts of the sensor to detect the phase of the incoming light and uses that to focus. Modern phones use similar tech, probably at least partly because Sony also supplies sensors for Apple and Samsung, along with super-small microelectromechanical systems, or MEMS, that can focus the lenses very accurately.

As for how laptop webcams focus: well, they don’t. There simply isn’t enough space. Think about the thickness of your phone; a laptop lid doesn’t have that. Add the cabling for the webcam, microphone, and Wi-Fi antennas that usually run back there, and you don’t have the space you would need for multi-element lenses. So a single focus point has to be picked, and that’s what you’re stuck with. Of course, you can get away with mediocre video just fine if your audio quality is good. Simple laptop microphones have one or sometimes two little holes, usually near the screen, to capture your voice, while better ones use more.

So the bad news is that there are serious constraints on improving webcam quality in laptops. The good news is that they can get better, and some laptop manufacturers are beginning to take the problem seriously.

credit: Linus Tech Tips


GitHub and OpenAI launch Copilot, an AI tool that generates code for you

GitHub and OpenAI have launched a technical preview of a new AI tool called Copilot, which lives inside the Visual Studio Code editor and autocompletes code snippets.

It is an AI-powered pair programmer that collaborates with people on their software development projects, suggesting lines or entire functions as the coder types.

GitHub Copilot integrates directly with Visual Studio Code. You can install it as an extension or use it in the cloud with GitHub Codespaces. Over time, the service should improve based on how you interact with GitHub Copilot. As you accept and reject suggestions, those suggestions should get better.

GitHub Copilot also leans heavily on a collaboration with OpenAI, the AI research company in which GitHub’s parent Microsoft invested $1 billion last year. Copilot uses a new AI system called OpenAI Codex, which is touted as “significantly more capable than GPT-3 in code generation,” according to a GitHub blog post from 29th June.


To figure out what you’re currently coding, GitHub Copilot tries to parse the meaning of a comment, the name of the function you are writing, or the past couple of lines. The company shows a few demos on its website.
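
The kind of completion this enables looks roughly like the following. The comment and function name are what a developer might type; the body is the sort of suggestion an AI pair programmer could propose. This is a hypothetical illustration, not actual Copilot output:

```python
# Developer types a comment and a function signature:
# "return the n most common words in a piece of text"
def most_common_words(text: str, n: int) -> list[str]:
    # ...and the tool suggests a body like this:
    from collections import Counter
    words = text.lower().split()
    return [word for word, _count in Counter(words).most_common(n)]

print(most_common_words("the cat and the dog and the bird", 2))  # ['the', 'and']
```

The point is that the comment and the function name carry enough intent for a model trained on public code to infer a plausible implementation, which the developer then reviews and accepts or rejects.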


It is worth noting that GitHub Copilot is not designed to write code on behalf of the developer; it’s more about helping developers by understanding their intent. GitHub also gives no guarantees that the code it generates will even work, as it doesn’t test the code. This means that it may not compile properly. So there are some risks, but it’s still very early days for Copilot.


TikTok running on Windows 11

Microsoft surprised us by running Android Apps natively on Windows 11: Here’s how this exciting new feature works.

Microsoft recently launched Windows 11, but with a leaked copy already out, nothing at the launch event was really a surprise except Android app support. Yes, you read that right: you can now run Android apps natively on Windows.

Windows already has a very vast range of apps for all types of tasks, and this new feature gives users the ability to do a lot more from their Windows machine. Microsoft has partnered with Intel to make use of Intel Bridge Technology, which makes Android on Windows possible.

How does all of this work?

Android apps on Windows are based on something called Intel Bridge Technology. This is a runtime post-compiler that is originally part of the company's XPU strategy and is not limited to Android apps. The tech was developed to bring applications designed for various kinds of hardware to x86 platforms.

It is unclear how well this would work with AMD-based devices; however, the chip maker did say: “Intel believes it is important to provide this capability across all x86 platforms and has designed Intel Bridge technology to support all x86 platforms (including AMD platforms). However, Intel delivers platforms that result in an optimized experience, making Windows platforms running on Intel Core processors the best choice.”

The Microsoft-Amazon app store.

Microsoft has teamed up with Amazon to bring the Amazon App Store to Windows, right inside the Windows Store app. You can download all the Android apps you need directly from the store app.


Microsoft did great work with Windows 11 and with bringing Android apps to its own operating system. Mobile app support for Windows on ARM and for Windows machines with AMD chips is still a bit uncertain, though. How well these apps work is something we will find out once beta versions of the system start rolling out.

More Windows Coverage

winget package manager

Windows now has its own Linux-like package manager, and it is really good.

If you are comfortable with a Linux-based machine, you probably know what a package manager is, and you probably miss it when using a Windows machine.

For those who don’t know, a package manager is usually a command-line program that automates the process of downloading, installing, configuring, upgrading, or removing applications from your OS.

Most Linux distros have a package manager built-in and it makes the process of installing apps super easy. 

On Windows, you had the option to install third-party package managers like Chocolatey, but Microsoft now has its own: the Windows Package Manager.

Although it does not come out of the box with Windows, it is ready to be used.

How to install the Windows Package Manager.

If you run a Windows Insider build, you probably already have the winget client installed. For those who don't, you can get the winget client with the App Installer from the Microsoft Store. Details about the process can be found in the documentation provided by Microsoft.

How to find and install apps using the Windows Package Manager.

To use the Windows Package Manager, you need to remember a few important commands:

winget search app_name
winget install app_name
winget upgrade app_name
winget uninstall app_name

These are pretty obvious and won’t take long to get used to. 

The winget search command looks up the app name you enter in the Microsoft Community Repository and shows you related results.

You can then type in the winget install app_name command to install the required app.

The winget upgrade command is used to upgrade a specific package to its latest version, while winget upgrade --all can be used to upgrade all installed packages.

The winget uninstall app_name command, as the name suggests, uninstalls the specified app.
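Putting those commands together, a typical session looks something like the sketch below. The package ID shown (Microsoft.VisualStudioCode) is just an example of the IDs the search results return; substitute whatever app you actually want.

```shell
# Search the community repository for an app
winget search vscode

# Install using the package ID shown in the search results
winget install Microsoft.VisualStudioCode

# Upgrade that one package, or everything at once
winget upgrade Microsoft.VisualStudioCode
winget upgrade --all

# Remove the package when you no longer need it
winget uninstall Microsoft.VisualStudioCode
```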


This is just the beginning of what the Windows Package Manager could do. We also expect Microsoft to bring a GUI for the tool, which is currently command-line only, and to ship it out of the box with the upcoming Windows 11 operating system.
