
The One [Simple] Method AI Implementers Use For Success

Who do you blame when AI projects fail? The technology? Your machine learning and data science team? Vendors? The data? Certainly you can put blame on solving the wrong problem with AI, or applying AI when you don’t need AI at all. But what happens when you have a very well-suited application for AI and the project still fails? Sometimes it comes down to a simple approach: don’t take so long.
At a recent Enterprise Data & AI event, a presenter shared that their AI projects take, on average, 18 to 24 months to go from concept to production. That is just way too long. There are many reasons why AI projects fail, and one common reason is that the project takes too long to reach production. AI projects shouldn’t take 18 or 24 months to go from pilot to production. Advocates of agile best practices would tell you that’s the old-school “waterfall” way of doing things, and it’s ripe for all sorts of problems.
Yet, despite the desire to be “agile” with short, iterative sprints on AI projects, organizations often struggle to get their AI projects off the ground. They simply don’t know how to do short, iterative AI projects. This is because many organizations are running their AI projects as if they were research-style “proofs of concept”. When companies start with a proof-of-concept (POC) project rather than a pilot, they set themselves up for failure. Proofs of concept often lead to failure because they don’t aim to solve a problem in the real world; they focus on testing an idea using idealized or simplistic data in a non-real-world environment. As a result, these organizations are working with data that isn’t representative of real-world data, with users who aren’t heavily invested in the project, and potentially outside the systems where the model will actually live. Those who are successful with AI projects have one simple piece of advice: ditch the proof of concept.
AI Pilots vs. Proof of Concepts
A proof of concept is a trial or test run to show whether something is even possible and to prove that the technology works. Proofs of concept (POCs) are run in very specific, controlled, limited environments rather than with real-world environments and data. This is much the way AI has been developed in research settings. Coincidentally, many AI project owners, data scientists, ML engineers and others come out of that research environment, which they are very comfortable and familiar with.
Continue reading: https://www.forbes.com/sites/cognitiveworld/2022/09/04/the-one-simple-method-ai-implementers-use-for-success/?sh=387d48823382


Why Do We Keep Repeating The Same Mistakes On AI?

Artificial intelligence has a long and rich history stretching over seven decades. What’s interesting is that AI predates even modern computers, with research on intelligent machines being some of the starting points for how we came up with digital computing in the first place. Early computing pioneer Alan Turing was also an early AI pioneer, developing ideas in the late 1940s and 1950s. Norbert Wiener, creator of the concepts of cybernetics, developed the first autonomous robots in the 1940s, when even transistors didn’t exist, let alone big data or the Cloud. Claude Shannon developed hardware mice that could solve mazes without needing any deep learning neural networks. W. Grey Walter famously built two autonomous cybernetic tortoises in the late 1940s that could navigate the world around them and even find their way back to their charging spot, without a single line of Python being coded. It was only after these developments and the subsequent coining of the term “AI” at the Dartmouth conference in 1956 that digital computing really became a thing.
So given all that, with all our amazing computing power, limitless Internet and data, and Cloud computing, we surely should have achieved the dreams of AI researchers that had us orbiting planets with autonomous robots and intelligent machines, as envisioned in 2001: A Space Odyssey, Star Wars, Star Trek, and other science fiction of the 1960s and 1970s. And yet today, our chatbots are not that much smarter than the ones developed in the 1960s, and our image recognition systems are satisfactory but still can’t recognize the elephant in the room. Are we really achieving AI, or are we falling into the same traps over and over? If AI has been around for decades now, then why are we still seeing so many challenges with its adoption? And why do we keep repeating the same mistakes from the past?
AI Sets its First Trap: The First AI winter
In order to better understand where we currently are with AI, you need to understand how we got here. The first major wave of AI interest and investment occurred from the early 1950s through the early 1970s. Much of the early AI research and development stemmed from the burgeoning fields of computer science, neuropsychology, brain science, linguistics, and other related areas. AI research built upon exponential improvements in computing technology. This, combined with funding from government, academic, and military sources, produced some of the earliest and most impressive advancements in AI. Yet, while computing technology continued to mature, the AI innovations developed during this window ground to a near halt in the mid-1970s. The funders of AI realized they weren’t achieving what was expected or promised for intelligent systems, and it felt like AI was a goal that would never be achieved. This period of decline in interest, funding, and research is known in the industry as the first AI Winter, so called because of the chill that researchers felt from investors, governments, universities, and potential customers.
Continue reading: https://www.forbes.com/sites/cognitiveworld/2022/09/03/why-do-we-keep-repeating-the-same-mistakes-on-ai/?sh=bf50d461475c


AI Is For Human Empowerment: So Why Are We Cutting Humans Out?

Almost every company understands the value that artificial intelligence (AI) or machine learning (ML) can bring to their business, but for many, the potential risks of adding AI still seem to outweigh the benefits. Report after report consistently ranks AI as critically important to C-suite executives. Remaining competitive means streamlining processes, increasing efficiency and improving outcomes, all of which can be achieved through AI and ML decisioning.
Despite the value that AI and ML bring, a lack of trust, or fear that the technology will open businesses to more risk, has slowed the implementation of AI/ML decisioning. This isn’t wholly unfounded—the risk of biased decisions in highly regulated industries and applications, like insurance eligibility, mortgage lending or talent acquisition, has been the subject of several new laws focused on the “right to explainability.” Earlier this year, Congress proposed the Algorithmic Accountability Act, and overseas, the European Union is pushing for stricter AI regulations as well. These laws, and the “right to explainability” movement in general, are a reaction to mistrust of AI/ML decisions.
In fact, ethical worries around AI and ML impede the use of AI/ML decisioning. Research from Forrester commissioned by InRule discovered that AI/ML leaders are fearful that bias could negatively impact their bottom line.
To solve this problem, businesses must rethink their goals for AI/ML decisioning. For too long, many outside of the AI/ML field have seen technology as the replacement for human intelligence instead of the amplification of it. In removing humans from the decision-making loop, we are increasing the chance of bias and inaccurate and potentially costly decisions.
Continue reading: https://www.forbes.com/sites/forbestechcouncil/2022/09/02/ai-is-for-human-empowerment-so-why-are-we-cutting-humans-out/?sh=16d1599b4400


How IoT Solutions Are Taking Tech Into The Future

The term ‘Internet of Things’ (IoT) has become widely known – and the technology has found unique applicability not only in our homes, but in our businesses, workplaces, and cities. The number of installed IoT devices is expected to surge to around 30.9 billion units by 2025.
These devices are vital tools for digital transformation and datafication – and their power lies in performance improvements as well as problem-solving capabilities. IoT’s importance as a technology trend this year and into the future lies in the role it plays in the success of other technologies. For manufacturers looking to drive evolution, keeping a finger on the pulse of the latest IoT trends is important for agility into 2023 and beyond.
●There are more than 7 billion connected IoT devices currently in operation.
●By 2030, 75% of all devices are expected to be IoT.
●Worldwide IoT spending is anticipated to reach $1 trillion, and this growth is predicted to continue in 2023 and beyond.
A Quick View of the Benefits of IoT in Business
●IoT solutions help to build resilient supply chains
●Improved health, wellbeing, safety, and security
●Optimized asset usage and maintenance
●Reduced overheads
●Improved communication and engagement
●Meaningful sustainability and environmental advances
Technology Trends Powered By IoT
1. Remote Monitoring
According to McKinsey, the COVID-19 pandemic accelerated the adoption of digital technologies by seven years. Off the back of global lockdowns, this naturally includes the requirement for remote monitoring and the move towards automated systems. These IoT-based technologies are being adopted to transform everything from building monitoring and machine performance to building occupancy and machine learning.
Continue reading: https://www.emsnow.com/how-iot-solutions-are-taking-tech-into-the-future/


The Fundamentals: How to Analyze Cryptocurrency

Cryptocurrency investing can be a great way to diversify investments, but figuring out which cryptocurrencies are suitable and which aren't can be challenging. To make an informed investment decision, it is important to know how to analyze cryptocurrencies.
1. Review the White Paper
Most crypto projects provide a white paper, which helps define the objectives and technical details of the cryptocurrency. While some white papers may contain technical jargon not understood by casual investors, it is important to read through the paper to learn about the vision of a project.
The white paper should clearly define the goals of the project, how its technology will achieve those goals, and how the cryptocurrency will function. Most white papers define a problem that the currency itself is meant to solve, and this problem and solution should be crystal clear to investors.
A red flag on any crypto project would be a white paper full of generic promises with no details. 
2. Research the Team
Cryptocurrencies are typically created by a team of founders and software developers who build the solution to a particular problem. To better understand how a project could perform, you should research the professional experience of the team running it.
This may include reviewing LinkedIn profiles to learn about the professional backgrounds of the technical and leadership staff, as well as learning about previous projects the team members have launched. The “About” page on any cryptocurrency’s website should also clearly articulate who is helping build the project and what their expertise entails.
Continue reading: https://www.investopedia.com/analyze-crypto-6456223


Blockchain, AI, and the metaverse — tools for better decision-making?

When it comes to working together, people need help — whether they're in a conference room or in the metaverse. Could technology such as blockchain and AI make a difference?
Last week, I was on a panel for the World Talent Economy Forum to talk about building consensus in the metaverse. While this virtual world has potential, it’s nowhere near ready to replace tools like Zoom or in-person meetings. In fact, it falls short of today’s alternatives in an important way: because it relies on avatars, there’s a higher risk of stolen identities and fraud. 
Generally, when building consensus, people like to look each other in the eye and use body language and personality to get others to agree. That’s true on video calls and in person. Even then, the loudest (or best connected) person in a meeting — not the most knowledgeable or capable — wins the day.
But what if a combination of blockchain (for security) and artificial intelligence (for better decisions) could be added to the mix? Then the metaverse — or even Zoom calls — could be transformed into far more effective tools, more effective even than meeting in person.
Let me explain.
Continue reading: https://www.computerworld.com/article/3672428/blockchain-ai-and-the-metaverse-tools-for-better-decision-making.html


Blockchain firms fund university research hubs to advance growth

Universities are implementing physical and virtual research hubs dedicated to advancing blockchain technology through scientific and educational knowledge.
The demand for organizations to adopt blockchain technology is growing rapidly. Recent research from market research and advisory firm Custom Market Insights found that the global blockchain technology market was valued at $4.8 billion in 2021 and is expected to reach $69 billion by 2030. With that kind of growth at stake, it has become critical for the industry to enable rigorous research into the development of the blockchain sector.
Tim Harrison, vice president of community and ecosystem at Input Output Global (IOG) — the developer arm behind the Cardano blockchain — told Cointelegraph that during the past year, the blockchain ecosystem has witnessed various risks from projects that have taken a “go fast and break things” approach.
“Not only do these companies run these risks for themselves, but mistakes and failures can also negatively impact their end consumers,” he said. As such, Harrison believes that peer-reviewed research can help prevent such situations while also resolving issues that continue to linger from earlier iterations of blockchain development.
Companies fund university-led research hubs
In order to ensure that blockchain projects are thoroughly researched moving forward, Harrison noted that IOG recently funded a $4.5 million Blockchain Research Hub at Stanford University. According to Harrison, the hub’s goal is to enrich the body of scientific knowledge within the blockchain and distributed ledger industry while driving a greater focus on fundamental research. 
Although the Blockchain Research Hub at Stanford was just announced on August 29, 2022, Aggelos Kiayias, chief scientist at IOG and a professor at the University of Edinburgh, told Cointelegraph that he believes the center will help the industry collectively solve current challenges.
Continue reading: https://cointelegraph.com/news/blockchain-firms-fund-university-research-hubs-to-advance-growth


Women Behind the Screen: An Interview with BlackBerry Threat Research & Intelligence Pros

September 1st is International Women in Cyber Day – a global movement dedicated to advancing and retaining women in the cybersecurity industry. To mark the occasion this year, the BlackBerry Blog caught up with two accomplished professionals on our Threat Research and Intelligence Team: Principal Threat Researcher Lysa Myers and Principal Threat Research Publisher Natasha Rohner. Read on to learn about their journey into cyber and insights they’ve gained along the way.
Q: What can you tell us about your journey into cybersecurity?
Lysa Myers: My journey into security wasn’t exactly conventional. I had always pictured myself with a job focusing on plants – one particular taxonomy class was an “Aha!” moment for me, understanding the family relationships between individual plant species. One summer, I took a job as an office assistant at a security company, which had some unexpected downtime, so I volunteered in the virus research group.
Having a lot of customer service experience in my previous career as a florist, I was a natural fit to help with triage for incoming malware samples. As I learned more about malware research, that taxonomy experience came in handy – being able to spot important similarities and differences between individual variants helped me to add detection for malware families.
Natasha Rohner: I also had what you might call a non-traditional journey into cybersecurity. After graduating with a degree in film production, my first job was as a work-for-hire writer for gaming giant Games Workshop, which had just launched a movie-based fiction publishing wing. They’d acquired the rights to film franchises such as Blade, Final Destination, and the Freddy Krueger movies, which (to my delight, as a huge science-fiction fan) they hired me to turn into novels.
Continue reading: https://blogs.blackberry.com/en/2022/09/women-behind-the-screen-an-interview-with-blackberry-threat-research-pros


Artificial intelligence is getting even smarter

Digital marketers still have a job of course. But it is not going to be quite the same job, as artificial intelligence begins its “second act”.

Yes, AI is still good at compiling, sorting and categorizing massive amounts of data. Only now it’s increasingly able to assist in creating content in ways it could not before.
All you need to do is give an AI app a specific prompt: render, for example, an image of a 1920s gangster taking a selfie with a smartphone. That technology did not exist 100 years ago, but the generated image does look convincing.
AI has evolved, but how does one now use it? Do you replace people and automate their work? Put the AI in charge? Or have the AI assist the human?
Necessity is the mother of improvisation
There’s long been the promise of using AI to create visual and written content. “One year ago, it couldn’t do that. It seemed like it was always ‘getting there’,” said Adam Binder, owner, founder and CEO of digital marketing and SEO firm Creative Click Media.
Things are different now, a reality Binder encountered when necessity called. “I was trying to make a deadline when my human writer was not there,” he said. Turning to AI, Binder made all the necessary inputs to get the app to do a bit of copywriting.
The app he used was GPT-3 by OpenAI, a nonprofit AI research and deployment company. It uses existing copy and top search picks from Google as raw material to generate its output. “The writing tool created a tone of voice,” Binder said. “It’s scary how similar it was [to the writer’s copy].”
Still, “AI is a long way from replacing the writer,” Binder said. “It can’t come up with a thesis.” It can’t compare and contrast, and it isn’t generating copy at the same level as an op-ed piece in the New York Times, he added.
Google put down its marker, declaring that AI-generated content violates its webmaster policy. However, Google may not have the means of detecting such copy. “AI writing is perfect,” said Chris Carr, president and CEO of digital agency Farotech. “Humans make mistakes.”
Continue reading: https://martech.org/artificial-intelligence-is-getting-even-smarter/


The Power of AI Coding Assistance

Until recently, coding involved repetitive tasks and required knowledge of many minute details. These aspects of coding detracted from the truly creative work that developers enjoy, and they slowed developers down.
Now, artificial intelligence technology promises to eliminate much of that repetitive work, and developers are no longer thrown off task by having to search the web for those minute details.
The technology works much like auto-complete in a word processor, but it writes code instead of plain language and can complete whole functions at a time.
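To make that concrete, here is a rough illustration of the interaction pattern: the developer types a comment and a function signature, and the assistant proposes a body. The completion below is hand-written and purely illustrative, not actual Copilot output.

```python
# What a developer might type: a short comment plus a function signature.
# An assistant in the Copilot mold would then suggest a body such as the
# one below. This suggestion is hand-written and purely illustrative.
import re

def is_valid_email(address: str) -> bool:
    """Return True if the string looks like a plausible email address."""
    # Suggested completion: a simple structural check, not full RFC validation.
    pattern = r"^[\w.+-]+@[\w-]+\.[\w.-]+$"
    return re.match(pattern, address) is not None

if __name__ == "__main__":
    print(is_valid_email("dev@example.com"))  # True
    print(is_valid_email("not-an-email"))     # False
```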
AI Helps with Algorithms, Boilerplate Code
Among the latest offerings in AI-powered coding is GitHub's Copilot, an AI pair-programmer tool available to all developers for $10 a month or $100 per year.
The company claims Copilot can suggest complete methods, boilerplate code, whole unit tests, and even complex algorithms.
“With AI-powered coding technology like Copilot, developers can work as before, but with greater speed and satisfaction, so it’s really easy to introduce,” explains Oege De Moor, vice president of GitHub Next. “It does help to be explicit in your instructions to the AI.”
He explains that during the Copilot technical preview, GitHub heard from users that they were writing better and more precise explanations in code comments because the AI gives them better suggestions.
“Users also write more tests because Copilot encourages developers to focus on the creative part of crafting good tests,” De Moor explains. “So, these users feel they write better code, hand in hand with Copilot.”
He adds that it is, of course, important that users are made aware of the limitations of the technology.
Continue reading: https://www.informationweek.com/software/the-power-of-ai-coding-assistance


How To Prepare Faster For Looming AI Regulation: Turn Defense Into Offense

If you're a business leader, you're likely trying to get more from artificial intelligence (AI). You also might be wary of the inherent risk and bias that AI may introduce and even wonder if AI will become regulated. Indeed, employing AI is risky, regulation is coming and you may need to shift your way of thinking about technology to use it effectively.
As in sports, defense can be used to spark offense. With AI, the same elements of an effective culture of compliance can also spark innovation, collaboration and agility with data science.
Human-In-The-Loop For AI
In April 2021, the European Commission issued a proposal for AI regulation, the Artificial Intelligence Act (AIA). It's the first attempt to provide a legal framework for AI and regulate corporate responsibility and fairness for AI-infused systems. The EU's ethics guidelines for trustworthy AI advise that "AI systems should empower human beings, allowing them to make informed decisions and foster fundamental rights. At the same time, proper oversight is needed through human-in-the-loop, human-on-the-loop, and human-in-command approaches."
The first and most important player in the AI game is the human. Since humans create algorithms, and humans are biased, AI inherits that bias.
The bad news, as Nobel Prize-winning psychologist Daniel Kahneman says in Noise, is that humans are unable to detect their own biases. The good news, Kahneman suggests, is that there's a simple way to identify and mitigate bias—have someone else identify it. He calls these people "decision observers."
Continue reading: https://www.forbes.com/sites/forbestechcouncil/2022/09/01/how-to-prepare-faster-for-looming-ai-regulation-turn-defense-into-offense/?sh=49663cf6110b


Where Are All The Female CTOs?

Earlier this year, I started receiving random messages on LinkedIn congratulating me for being placed on Sifted’s list of 130-plus female CTOs in Europe. Apparently, this was the first time a list like this had been compiled. In fact, up until then, it did not even occur to me how uncommon it was to be a female CTO. Soon, I shifted from feeling flattered to being outraged. Why are there so few female CTOs in 2022? And what can we do to change that?
An Unconventional Journey To Becoming A CTO
I never thought that I would become a CTO. After graduating with a Master’s in Technology, Innovation and Education from the Harvard Graduate School of Education, I gravitated toward edtech startups in the U.S. and Europe. I took on roles as an instructional designer and product manager to design and develop software products for learning. After working with software development teams for several years, I decided that I needed a deeper understanding of the technology itself and learned to code.
In 2019, I joined SwipeGuide, an Industry 4.0 startup for work instructions based in Amsterdam, as a software engineer. The company was hit badly by the Covid pandemic in 2020, and it needed to make some difficult decisions. There was a significant turnover, and the former CTO left the company. Instead of recruiting someone for the position outside of the company, our CEO decided to promote me internally.
The first year was like drinking from a fire hose. Of course, I was nervous. But I started to get my hands wet and just tackle the problems one by one, from streamlining processes in the product team, recruiting and hiring to scaling and securing the infrastructure, solidifying data privacy practices and API fair use policies and so much more. I’ve never stopped learning.
Continue reading: https://www.forbes.com/sites/forbestechcouncil/2022/08/31/where-are-all-the-female-ctos/?sh=51e806bd701f


Can Cryptocurrency Be Used as Collateral for Business Loans?

  • Cryptocurrency (or crypto currency) has become increasingly popular with investors and is even accepted as payment at businesses around the world. 
  • As it gains popularity, cryptolenders are becoming more common as an alternative source of small business and personal lending. 
  • Find out more about cryptolending and the rise of digital currencies from Nav’s small business experts. 
What is Cryptolending? 
Cryptolending (or crypto lending) is the process of using crypto currency, such as Bitcoin (BTC), as collateral, as you would with a secured loan. It’s a decentralized finance (or DeFi) service that uses the blockchain to lend crypto assets to borrowers and then earn interest in crypto. For a cryptolender, it can be compared to opening a high-yield savings account, where you earn interest on the money in the account, but using crypto currency instead. 
It sounds straightforward and like a great deal, but because digital currencies are still new and the crypto market isn’t necessarily always stable, it’s not common for traditional lenders to participate in cryptolending yet. However, cryptolending is quickly becoming one of the most popular DeFi services on cryptocurrency platforms and exchanges. 
How Does Cryptolending Work?
With cryptolending, lenders and borrowers use a crypto platform or exchange as a lending marketplace. Both will sign up to the platform using their digital wallets. To engage with cryptolending, a cryptolender will move their cryptocurrency from their digital wallet, or crypto wallet, into a high-interest lending account on the platform. Borrowers can then apply for cryptoloans through the platform, which will approve the borrower and set interest rates and fees. The loan will be paid for using funds from the cryptolenders’ accounts. As the borrower repays the loan through monthly payments, the cryptolender and the platform will collect the interest. 
Every platform will have its own interest rates and fees. Lenders may get a higher annual percentage yield (APY) if they’re willing to keep their cryptocurrency locked into the account for a certain amount of time without making withdrawals, giving the platform more access to the funds for lending purposes. 
There are also automated methods for cryptolending. In this scenario, borrowers and lenders simply connect their digital wallets to a centralized lending protocol which handles the approvals and transfers based on certain conditions being met. These conditions are called smart contracts, and they’re made up of code running on blockchain networks that automatically determine when a loan can be approved.
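As a rough, off-chain illustration of the kind of approval condition such a contract might encode (real smart contracts run on-chain, typically in languages like Solidity; the loan-to-value cap and field names below are hypothetical):

```python
# Off-chain sketch of a lending condition a smart contract might encode.
# Field names and the 50% loan-to-value cap are hypothetical.
from dataclasses import dataclass

@dataclass
class LoanRequest:
    collateral_value_usd: float  # current market value of the pledged crypto
    loan_amount_usd: float       # amount the borrower wants to draw
    max_ltv: float = 0.5         # hypothetical platform cap on loan-to-value

def approve_loan(req: LoanRequest) -> bool:
    """Approve only if the loan stays within the allowed loan-to-value ratio."""
    ltv = req.loan_amount_usd / req.collateral_value_usd
    return ltv <= req.max_ltv

# $10,000 of collateral supports a $4,000 loan under a 50% LTV cap, but not $6,000.
print(approve_loan(LoanRequest(collateral_value_usd=10_000, loan_amount_usd=4_000)))  # True
print(approve_loan(LoanRequest(collateral_value_usd=10_000, loan_amount_usd=6_000)))  # False
```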
Continue reading: https://www.nav.com/blog/crypto-collateral-1498790/


What Is the Blockchain Trilemma?

Here’s why the triad of security, scalability and decentralization is so hard to achieve.
One of the problems that the blockchain industry has been facing for a long time is the blockchain trilemma. The blockchain trilemma (also called the scalability trilemma) is the belief that decentralized platforms can only accomplish two out of the following three goals—security, scalability and decentralization—at a time. 
The term was first coined by Ethereum founder Vitalik Buterin. He said that when developers are creating blockchains, they end up sacrificing one of the three goals to achieve the other two. This trilemma is so deeply entrenched into blockchain technology that even the top two cryptocurrencies by market capitalization, Bitcoin and Ethereum, have not been able to achieve all three goals. Let’s make sense of the blockchain trilemma by breaking down each of the three goals, the challenges they present and how the blockchain community is tackling them.
The three goals blockchains need to achieve
Decentralization
The main idea behind cryptocurrency was to facilitate transactions without a central authority, that is, to have a decentralized network. In the interest of decentralization, information on public blockchains is stored on a wide network of nodes (the computers of those using the blockchain) across different locations. This means that anyone can read and write on the blockchain. The presence of a large number of nodes makes it nearly impossible to attack a public blockchain, since transactions can be traced back to individual nodes. 
However, the presence of a lot of nodes (and consequently, a high number of users) also reduces the number of transactions per second the blockchain can process. 
Scalability
The slow speed of public blockchains leads us to the second goal, scalability. To become more useful and practical on a large scale, blockchains need to be capable of processing a myriad of transactions quickly without charging a steep fee for them. Yet public blockchains are not very scalable at present because of their low transaction speeds. 
Continue reading: https://www.jumpstartmag.com/what-is-the-blockchain-trilemma/


Blockchain: where does fintech go from here?

Recent analysis shows the seismic growth within the blockchain economy, with 2021 its annus mirabilis. What is the key now to increasing blockchain uptake?
The increasing adoption of blockchain and the advance towards Web3 mean that the practical applications of blockchain are becoming more and more exciting. In fact, according to CB Insights’ State of Blockchain report, funding in blockchain companies grew by 713% year-over-year in 2021 to reach more than US$25bn. At the same time, funding in NFTs soared by almost 13,000% (yes – thirteen thousand) to US$4.8bn.
It’s no surprise that the New York-based market research company called it “a breakthrough year” for the blockchain economy, with venture funding to blockchain startups hitting new record highs and the volume of blockchain deals growing by 88% to reach 1,247.
Global blockchain market experiencing seismic growth
In 2022, experts predict that slightly more than half (51%) of global blockchain funding will come from the US, with expected market growth from US$7.18bn to US$67.4bn by 2026. Some of the primary factors driving this rapid expansion of the blockchain market include an increase in venture capital funding and investments; adoption of blockchain technology in cybersecurity applications; easy access to smart contracts and digital identities thanks to the widespread use of blockchain technology; and conscious efforts from governments.
The US has the largest blockchain market share. The region's early adoption of blockchain and the fact that various manufacturers offer security have provided fuel to the fire.
Continue reading: https://fintechmagazine.com/articles/blockchain-where-does-fintech-go-from-here


Filling The Web3 Regulatory Void Is Up To Businesses

Web3 offers the potential of a huge and lucrative new horizon for entrepreneurs and businesses of all sizes—but along with this comes the risk of the unknown and unregulated business practices. Can businesses and entrepreneurs afford to wait until regulation catches up?
Top businesses have already begun establishing their presence on the blockchain, so the answer appears to be “no.” In fact, it’s likely that businesses and entrepreneurs who are already blazing the trail to Web3 will also show us the way toward a more trustworthy and, yes, regulated blockchain. Getting consumers to take the leap to Web3 will depend on convincing them that they can trust it.
On the one hand, proponents of the decentralized web claim that its decentralization is what makes it more secure. On the other hand, news of bad actors siphoning millions of dollars from blockchain wallets tells us otherwise.
So, how do we get from the chaos of an unregulated new frontier to the relative order of a regulated and trusted Web3?
Continue reading: https://www.forbes.com/sites/forbesbusinesscouncil/2022/08/30/filling-the-web3-regulatory-void-is-up-to-businesses/?sh=1319cc616949


Soaking Up the Sun with Artificial Intelligence

The sun continuously transmits trillions of watts of energy to the Earth. It will be doing so for billions more years. Yet, we have only just begun tapping into that abundant, renewable source of energy at affordable cost.
Solar absorbers are materials used to convert this energy into heat or electricity. Maria Chan, a scientist at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, has developed a machine learning method for screening many thousands of compounds as solar absorbers. Her co-author on this project was Arun Mannodi-Kanakkithodi, a former Argonne postdoc who is now an assistant professor at Purdue University.
“We are truly in a new era of applying AI and high-performance computing to materials discovery.” — Maria Chan, scientist, Center for Nanoscale Materials
“According to a recent DOE study, by 2035, solar energy could power 40% of the nation’s electricity,” said Chan. “And it could help with decarbonizing the grid and provide many new jobs.”
Chan and Mannodi-Kanakkithodi are betting that machine learning will play a vital role in realizing that lofty goal. A form of artificial intelligence (AI), machine learning uses a combination of large data sets and algorithms to imitate the way that humans learn. It learns from training with sample data and past experience to make ever better predictions.
In the days of Thomas Edison, scientists discovered new materials through a laborious process of trial and error, testing many different candidates until one worked. Over the last several decades, they have also relied on labor-intensive calculations requiring as long as a thousand hours to predict a material’s properties. Now, they can shortcut both discovery processes by calling on machine learning.
At present, the primary absorber in solar cells is either silicon or cadmium telluride. Such cells are now commonplace. But they remain fairly expensive and energy intensive to manufacture.
The team used their machine learning method to assess the solar energy properties of a class of material called halide perovskites. Over the past decade, many researchers have been studying perovskites because of their remarkable efficiency in converting sunlight to electricity. They also offer the prospect of much lower cost and energy input for material preparation and cell building.
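To illustrate the general screening idea, and only the idea, here is a minimal sketch using scikit-learn with made-up descriptors and a synthetic target; the actual study's features, models, and datasets are different and far more sophisticated.

```python
# Minimal sketch of ML-based materials screening. The descriptors, the target,
# and the data are all synthetic; the real research workflow differs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Pretend each candidate compound is described by three numeric descriptors
# (e.g., band gap, formation energy, average ionic radius).
X_known = rng.normal(size=(200, 3))  # compounds whose properties were already computed
y_known = X_known @ np.array([0.6, -0.3, 0.1]) + rng.normal(scale=0.05, size=200)  # toy target

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_known, y_known)

# Screen a much larger pool of unexplored candidates and keep the top predictions.
X_pool = rng.normal(size=(10_000, 3))
scores = model.predict(X_pool)
top_candidates = np.argsort(scores)[-10:][::-1]
print("Most promising candidate indices:", top_candidates)
```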
Continue reading: https://www.newswise.com/doescience/soaking-up-the-sun-with-artificial-intelligence/?article_id=777556


Rise of AI in Business GRC

The global governance, risk, and compliance (GRC) landscape is rapidly evolving. Businesses are under scrutiny from regulators now insisting on GRC disclosures and from institutional investors who have started incorporating ESG among their investment criteria. With this, boards and leadership teams have started acknowledging the need for better GRC strategies, not just to counter risk but to ensure business continuity.  
The past two years' events have also highlighted the need for a more rapid response capability to risk. According to OCEG, a global non-profit think tank on GRC, 70% of organizations in a recent survey reported new GRC challenges from having employees working remotely. Also, 60% of organizations reported that increased data privacy and cybersecurity regulations drove significant changes to their approach to GRC. Business leaders have come to understand that to thrive in an accelerating digital economy, they need to change their approach to GRC and become more resilient, risk-aware, and better-governed enterprises. 
Businesses in India have mostly followed a traditional approach to managing governance, risk, and compliance. That is, by addressing them as separate silos. For example, each department may have its risk reporting structures with little contact with other departments within the organization. However, as risks become more intertwined, the processes used to manage them often contradict each other. This leads to duplication of work, an increase in costs, and an overall increase in business risk. An Artificial Intelligence (AI)-enabled integrated approach to business GRC is the key to bringing everything together.  
Continue reading: https://indiaai.gov.in/article/rise-of-ai-in-business-grc


In The Face Of Recession, Investing In AI Is A Smarter Strategy Than Ever

As experts continue to debate whether a true recession is on its way, two things are certain: The economy is contracting, and borrowing is more expensive than it has been in over a decade. As a result, many businesses are becoming more cash-conservative, slowing their growth plans and battening down the hatches for what might be a challenging handful of years.
But as belts are tightening, business leaders still need to consider continued investment in areas that can improve business operations. As they evaluate how to optimize under financial pressure, businesses must weigh where their investment dollars will give them the best and quickest ROI.
And when it comes to bang for their buck, leaders are coming back time and time again to investments in technology. Tech leaders aren’t pulling back from technological investments at all, even in the face of a possible recession. Wisely, these leaders understand more than ever before that technology isn’t a cost center—it’s a business driver.
But not all technological investments are created equal, even when it comes to automation. Here are some factors finance leaders should consider when weighing where to allocate their increasingly precious investment dollars.
Approach RPA With Caution
Automation is near the top of business leaders’ list, beaten out only by cybersecurity and analytics, according to a Bain & Company survey of 180 IT decision makers across North America and Europe. Forty-one percent of these respondents cited “building automation capabilities within business lines” as one of their most critical IT priorities.
Indeed, on its face, automation poses several cost-saving benefits, from reducing overhead to minimizing costly and time-consuming human error. But these solutions don’t always accomplish what they promise, especially when they involve robotic process automation or RPA.
Continue reading: https://www.forbes.com/sites/forbestechcouncil/2022/09/01/in-the-face-of-recession-investing-in-ai-is-a-smarter-strategy-than-ever/?sh=2f9bf4733e53


When — and Why — You Should Explain How Your AI Works

“With the amount of data today, we know there is no way we as human beings can process it all…The only technique we know that can harvest insight from the data, is artificial intelligence,” IBM CEO Arvind Krishna recently told the Wall Street Journal.
The insights to which Krishna is referring are patterns in the data that can help companies make predictions, whether that’s the likelihood of someone defaulting on a mortgage, the probability of developing diabetes within the next two years, or whether a job candidate is a good fit. More specifically, AI identifies mathematical patterns found in thousands of variables and the relations among those variables. These patterns can be so complex that they can defy human understanding.
This can create a problem: While we understand the variables we put into the AI (mortgage applications, medical histories, resumes) and understand the outputs (approved for the loan, has diabetes, worthy of an interview), we might not understand what’s going on between the inputs and the outputs. The AI can be a “black box,” which often renders us unable to answer crucial questions about the operations of the “machine”: Is it making reliable predictions? Is it making those predictions on solid or justified grounds? Will we know how to fix it if it breaks? Or more generally: can we trust a tool whose operations we don’t understand, particularly when the stakes are high?
To the minds of many, the need to answer these questions leads to the demand for explainable AI: in short, AI whose predictions we can explain.
What Makes an Explanation Good?
A good explanation should be intelligible to its intended audience, and it should be useful, in the sense that it helps that audience achieve their goals. When it comes to explainable AI, there are a variety of stakeholders that might need to understand how an AI made a decision: regulators, end-users, data scientists, executives charged with protecting the organization’s brand, and impacted consumers, to name a few. All of these groups have different skill sets, knowledge, and goals — an average citizen wouldn’t likely understand a report intended for data scientists.
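As one concrete, and deliberately hedged, illustration of what an explanation can look like for a data-science audience, the sketch below scores how much each input feature matters to a black-box model using permutation importance; the dataset and feature names are synthetic, and this is only one of many explanation techniques.

```python
# Sketch: explaining a black-box classifier by measuring how much each feature
# matters to its predictions. Data and feature names are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
n = 1_000
income = rng.normal(60, 15, n)          # thousands of dollars
debt_ratio = rng.uniform(0, 1, n)
years_employed = rng.integers(0, 30, n)

X = np.column_stack([income, debt_ratio, years_employed])
# Toy ground truth: approval depends mostly on income and debt ratio.
y = ((income / 100 - debt_ratio + rng.normal(0, 0.1, n)) > 0.1).astype(int)

model = GradientBoostingClassifier().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Higher scores mean the model leans on that feature more heavily.
for name, score in zip(["income", "debt_ratio", "years_employed"], result.importances_mean):
    print(f"{name:>15}: {score:.3f}")
```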
Continue reading: https://hbr.org/2022/08/when-and-why-you-should-explain-how-your-ai-works


Edge Computing Enters the Thin vs. Thick Debate

Despite the growth and variety in edge computing device availability, the path toward thin or thick edge isn’t exactly clear.
No one would dispute that Internet of Things (IoT) deployments are headed toward the edge: into more remote, uncontrolled environments where computers, and especially edge computing power, haven't traditionally gone before.
Better hardware and connectivity pair together to drive more computer power to the edge, which means organizations can do a lot more work. And that, in turn, is opening up brand-new use cases for organizations that would have never considered this infrastructure. The edge computing industry is maturing to the point where its various hardware, software, and connectivity combinations inevitably create a more fractured environment.
To highlight that degree of change, a Gartner survey of 500 IT leaders showed that their organizations invested, on average, $417,000 in IoT and $262,000 in edge computing throughout 2021. Those figures will inevitably grow; Grand View Research, Inc. has published a report stating that the global edge computing market will grow 38% annually through 2028, with a massive focus on edge servers.
One of these divergent paths in edge computing, particularly when it comes to IoT, is thin vs. thick edge deployments, and knowing which one your organization actually needs—or whether the ideal form is a hybrid of the two—will pay dividends when you don’t get bogged down in rework or redeployment costs six months after pushing the power button for the first time.
Where is the edge between thin and thick?
First things first: we're focused on IoT-based infrastructure here, not anything related to distributed cloud computing, like content delivery networks (CDNs) or gamelets, which leverage the number of public cloud data centers to process requests closer to their users.
And the thin/thick distinction isn’t a competitive one—both types of edge deployments have many valuable use cases. Think of them as product categories, not philosophies, the way you choose between a desktop computer for ultimate power or a laptop for the convenience of portability.
Edge computing began in the realm of thin. Think of these as low-power and low-memory devices that transmit data to a centralized system for analysis or processing with minimal latency. It’s how most organizations are collecting streaming data to support real-time analytics.
If thin edge devices do take action independently, it’s generally simple, like responding to a light or motion sensor by taking a simple action. They’ve also been heavily employed in larger organizations as they’ve tried to connect their legacy or on-premises equipment to their public cloud, with thin edge devices acting as a “bridge” that securely passes requests and data between local hardware and the public cloud.
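A minimal sketch of that thin-edge pattern, read a sensor and hand the raw data to a central system, might look like the following; the device ID, endpoint URL, and payload fields are placeholders, not a real API.

```python
# Sketch of a "thin edge" device loop: read a sensor, do no local analysis,
# and ship the raw reading to a central system. Endpoint and fields are placeholders.
import json
import time
import urllib.request

CENTRAL_ENDPOINT = "https://example.com/ingest"  # hypothetical ingest URL

def read_temperature_celsius() -> float:
    """Stand-in for a real sensor driver."""
    return 21.7

def post_reading(value: float) -> None:
    payload = json.dumps({"device_id": "edge-001", "temp_c": value, "ts": time.time()}).encode()
    req = urllib.request.Request(CENTRAL_ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        resp.read()  # the central system does the analytics; the device only transmits

for _ in range(3):  # a real device would loop indefinitely at a low duty cycle
    try:
        post_reading(read_temperature_celsius())
    except OSError as err:  # the placeholder endpoint will not accept posts in a demo
        print("transmit failed:", err)
    time.sleep(10)
```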
Continue reading: https://www.rtinsights.com/edge-computing-enters-the-thin-vs-thick-debate/


Advancing Security | The Age of AI & Machine Learning in Cybersecurity

Modern cyber attackers’ tactics, techniques, and procedures (TTPs) have become both rapid and abundant while advanced threats such as ransomware, cryptojacking, phishing, and software supply chain attacks are on an explosive rise. The increasing dependence global workforces have on digital resources adds another facet to a growing cyber attack surface we all now share. In an effort to stand up to these challenges, businesses task their CISOs with developing, maintaining, and constantly updating their cybersecurity strategies and solutions.
From a tactical standpoint, CISOs ensure that their business’s security architecture can withstand the ever-shifting modern threat landscape. This means choosing the right tool stack that is capable of combating complex cyber threats at the breakneck speed in which they appear. As single-layer, reactive security solutions can no longer keep up with increasingly sophisticated cybercriminals, CISOs now have to stack multi-layered and proactive solutions together to build an adequate defense posture.
Advanced Threats Call for Advanced Solutions
Today, many CISOs know that artificial intelligence (AI) and machine learning (ML) are needed to accelerate and automate the quick decision-making process needed to identify and respond to advanced cyber threats. AI is designed to give computers the responsive capability of the human mind. The ML discipline falls under the umbrella of AI. It continuously analyzes data to find existing patterns of behavior to form decisions and conclusions and, ultimately, detect novel malware.
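As a small, hedged illustration of that idea, the sketch below trains an anomaly detector on synthetic "normal" activity so that unusual behavior stands out; real security products rely on far richer telemetry and models.

```python
# Minimal sketch of behavior-based anomaly detection with synthetic features;
# real security tooling uses far richer telemetry and models.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Pretend each row summarizes a process: [files touched/min, outbound MB/min, CPU %].
normal_activity = rng.normal(loc=[5.0, 0.2, 10.0], scale=[2.0, 0.1, 5.0], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

suspicious = np.array([[400.0, 50.0, 95.0]])  # mass file access plus heavy outbound traffic
routine = np.array([[6.0, 0.25, 12.0]])

print(detector.predict(suspicious))  # [-1] -> flagged as anomalous
print(detector.predict(routine))     # [ 1] -> consistent with learned normal behavior
```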
The task of building the right security stack is also one constantly under discussion, even on a federal level. In May 2022, the U.S. Senate Armed Forces Committee’s Subcommittee on Cyber held a congressional hearing on the importance of leveraging artificial intelligence and machine learning within the cyberspace. This hearing, including representatives from Google and the Center for Security and Emerging Technology at Georgetown University, discussed the use of AI and ML to defend against adversary attacks, effectively organize data, and process millions of attack vectors per second, far surpassing any human-only capability at threat detection.
Continue reading: https://www.sentinelone.com/blog/advancing-security-the-age-of-ai-machine-learning-in-cybersecurity/


Let’s Get It Right This Time: Applying Experiences Of Shadow IT To Edge Computing

The early 2000s saw the rise of the thought-provoking term “Shadow IT” to describe how application teams started deploying to their own infrastructure to avoid the perceived shortcomings of the processes and systems provided by central IT. I now see some of the anti-patterns from that era being applied to edge computing.
The early version of cloud computing was simply the ability to get access to on-demand virtual machines hosted on someone else’s computers. Amazon was arguably first out of the gates with the commercial launch of its Elastic Compute Cloud (EC2) where users could create, launch and terminate VM-backed server instances for their use on the internet in a matter of minutes and pay by credit card.
At the time, many IT organizations did not grasp that virtualization in itself did not make applications more valuable to the organization, but it made operating them significantly more cost-efficient and convenient for the application teams. So IT teams kept waiting for the first proof-point application where the fact that the application was running in a VM was in and of itself valuable. This was, of course, never the point of virtualization.
At the same time, application teams started rapidly moving to the cloud (i.e., moving applications to run on programmatically provisioned and rented VMs) on externally hosted platforms. This allowed them to avoid the standard IT practices of the time, which often involved multiweek delivery times and lengthy paperwork. The downsides of shortcutting IT-anchored processes included the risk of data loss or leaks, the risk of inefficiencies, compliance issues and general organizational dysfunction.
While the benefits of using the type of flexible infrastructure provided by the early-stage cloud providers were evident to the application teams, there was a growing concern over issues related to operations and security. Some applications could be moved to the nascent cloud platforms without significant impact while others (e.g., file sharing, storage and collaboration tools) presented big risks to enterprises and their sensitive data.
Eventually, the IT organizations caught up with the disconnect between the services and infrastructure they offered and what application teams needed, and started shifting the focus from “building everything” to “let us show you how to build it.” Quite often, IT teams became competence centers for business practices on a best-of-breed combination of internally and externally managed infrastructure.
Continue reading: https://www.forbes.com/sites/forbestechcouncil/2022/08/31/lets-get-it-right-this-time-applying-experiences-of-shadow-it-to-edge-computing/?sh=56c4501afc27


Why DataOps-Centered Engineering is the Future of Data

DataOps will soon become integral to data engineering, influencing the future of data. Many organizations today still struggle to harness data and analytics to gain actionable insights. By centering DataOps in their processes, data engineers will lead businesses to success, building the infrastructure required for automation, agility and better decision-making.
DataOps is a set of practices and technologies that operationalizes data management to deliver continuous data for modern analytics in the face of constant change. DataOps streamlines processes and automatically organizes what would otherwise be chaotic data sets, continuously yielding demonstrable value to the business.
A well-designed DataOps program enables organizations to identify and collect data from all data sources, integrate new data into data pipelines, and make data collected from various sources available to all users. It centralizes data and eliminates data silos.
Operationalization, through XOps including DataOps, adds significant value to businesses and can be especially useful to companies deploying machine learning and AI. Some 95% of tech leaders consider AI to be important in their digital transformations, yet 70% of companies report no valuable return on their AI investments.
With the power of cloud computing, business intelligence (BI) – once restricted to reporting on past transactions – has evolved into modern data analytics operating in real-time, at the speed of business. In addition to analytics’ diagnostic and descriptive capabilities, machine learning and AI enable the ability to be predictive and prescriptive so companies can generate revenue and stay competitive.

 
However, by harnessing DataOps, companies can realize greater AI adoption—and reap the rewards it will provide in the future.
To understand why DataOps is our ticket to the future, let’s take a few steps back.
Why Operationalization is Key
A comprehensive data engineering platform provides foundational architecture that reinforces existing ops disciplines—DataOps, DevOps, MLOps and XOps—under a single, well-managed umbrella.
Without DevOps operationalization, apps are too often developed and managed in a silo. Under a siloed approach, disparate parts of the business are often disconnected. For example, your engineering team could be perfecting something without sufficient business input because they lack the connectivity to continuously test and iterate. The absence of operationalization will result in downtime if there are any post-production errors.
Continue reading: https://www.datanami.com/2022/08/31/why-dataops-centered-engineering-is-the-future-of-data/


Integrating CX Into Data Management

The Four Ts of Integrating Data Management and Customer Experience
We’re now living in an era of hyper-personalized service. In exchange for sharing data, customers across industries now expect companies to take their activities, preferences, and histories with the company into account at every touchpoint to offer the most streamlined services possible. As such, having access to customer data is the key to avoiding pain points, resolving customer concerns, and delivering a top-of-the-line customer experience (CX). However, collecting, storing, and using customer data comes with its own set of complications.
The mismanagement of data is, understandably, a major concern for business leaders and consumers alike. With the seemingly constant barrage of news stories centered on data breaches and customer exposures, all eyes are on corporations that deal in consumer data. Misusing or mismanaging that data can spell disaster for businesses, as these kinds of missteps can permanently alter the public perception of a company’s trustworthiness and reliability. Unfortunately, these stories don’t go away; the association between a brand and a data breach can grow deep roots in the public consciousness and interfere with business-as-usual.
By rethinking the relationship between data management and CX processes, companies can streamline experiences, protect customer data, and build public trust without investing in new systems or security measures. Rather than operating separately, cybersecurity and CX teams can work together to do their part in protecting a company’s most valuable assets.
Bridging the gap
Traditionally, data management and CX teams have operated independently. It makes some sense: data is for IT and analytics teams, while CX teams deal with the human element. However, this approach fundamentally misses the primary function of CX: to give customers confidence in the brand. At the end of the day, service representatives and other external touchpoints are the only way customers interact with a brand. To most customers, they are the brand. If customers feel they can’t trust service professionals with their data, they can’t trust the company with that data. It’s that simple.
Luckily, bridging the gap between data management procedures and CX processes isn't as large an undertaking as many business leaders believe. Instead of allowing these two elements to operate independently, companies should view proper data management procedures and good CX as two parts of the same whole. Customer service, security, and data teams can work collaboratively to manage data and customer experiences more effectively.
Continue reading: https://martechseries.com/mts-insights/guest-authors/integrating-cx-into-data-management/

