Rhizomatic Learning

I saw the term "rhizomatic learning" used in an article about digital pedagogy. I know about rhizomes because I am a gardener, but its use in the context of learning was new to me and not immediately clear.

As introduced by the philosophers Gilles Deleuze and Felix Guattari, the rhizome borrows the botanical term for a root structure that expands and connects in multiple directions, creating a decentralized, horizontal structure. Applied to learning, particularly in higher education, it means that students navigate their learning based on the cognitive conflicts they encounter.

Rhizomatic learning encourages students to acquire knowledge through the interconnectedness of curricular content, prompting them to explore diverse perspectives and methods. This is not a traditional approach or path but one that can lead to a critical, reflective learning experience.

September is when I divide my iris rhizomes based on their nodes, which is also a common networking term.

Rhizomes help plants spread and survive in various conditions. They often store nutrients and energy, allowing the plant to regrow if its above-ground parts are damaged or destroyed. Unlike roots, rhizomes have nodes from which new shoots and roots can emerge. In my garden, I am most familiar with the types of iris plants that have rhizomes. Ginger is another example; the part of the ginger plant we use as a spice is a rhizome. Many species of bamboo spread via rhizomes, which can form dense clusters and cover large areas. Near water, you often find cattails (Typha spp.), a wetland plant whose rhizomes anchor it in muddy soils and help it spread across wetlands.

Applying this concept to the principles of critical pedagogy and to generative AI could offer a new dimension to the relationship between learning situations and the digitization of learning processes. The rhizome, in this framework, symbolizes a non-hierarchical, decentralized network of ideas and knowledge, in contrast to traditional, linear models of learning.

The term is new to me but the idea is not completely new. I have used approaches that seem to fit into this framework.

Platforms like forums, social media, MOOCs (Massive Open Online Courses), and online learning communities often embody rhizomatic principles, letting learners pursue diverse interests and create their own learning paths.

In PBL (project-based learning), students explore real-world problems and collaborate on projects, allowing for a more flexible, student-driven approach to acquiring knowledge.

Inquiry-based learning is an approach that encourages students to ask questions, conduct research, and explore topics of interest, promoting a more decentralized and learner-directed way of learning.

Individuals engaged in self-directed learning, who take charge of their own learning journeys and choose what and how they learn based on their personal goals and interests, are also engaging in rhizomatic learning.

Bias in AI

What is that smart thermostat doing with our data? Can a thermostat have a bias?

I read an article from Rutgers University-Camden that begins by saying that most people now realize that artificial intelligence has become increasingly embedded in everyday life. This has created concerns. One of the less-discussed concerns is bias in its programming. Here are some excerpts from the article.

Some AI tasks are so innocuous that users don't think about AI being involved. Your email uses it. Your online searches use it. That smart thermostat uses it. Are those uses frightening? But as its capabilities expand, so does its potential for wide-ranging impact.

Organizations and corporations have jumped on the opportunity presented by AI and Big Data to automate processes and increase the speed and accuracy of decisions large and small. Market research has found that 84 percent of C-suite executives believe they must leverage artificial intelligence to achieve their growth objectives. Three out of four believe that if they don't take advantage of AI in the next five years, they risk going out of business entirely.

Bias can cause artificial intelligence to make decisions that are systematically unfair to particular groups of people, and researchers have found this can cause real harm. Rutgers-Camden researcher Iman Dehzangi says, "Artificial intelligence and machine learning are poised to create valuable opportunities by automating or accelerating many different tasks and processes. One of the challenges, however, is to overcome potential pitfalls such as bias."

What does biased AI do? It can give consistently different outputs for certain groups compared to others. It can discriminate based on race, gender, biological sex, nationality, social class, or many other factors.
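To make that concrete, here is a minimal sketch of one simple way such a disparity can be measured: compare how often a model makes a positive decision for each group. This is my own illustration, not something from the Rutgers article, and the decisions and group labels are made-up example data.

```python
# A minimal sketch of a simple fairness check: compare how often a model
# says "yes" for each group. The data below is invented for illustration.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Return the fraction of positive (1) decisions per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for decision, group in zip(decisions, groups):
        counts[group][0] += decision
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical loan-approval decisions from a model, paired with a group label.
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
print(rates)  # {'A': 0.8, 'B': 0.2}

# A large gap between groups (here 0.8 vs. 0.2) is the kind of
# "consistently different output" that signals possible bias.
print("gap:", max(rates.values()) - min(rates.values()))
```

A gap by itself does not prove unfairness, but it is the sort of signal that prompts a closer look at the data and the model.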

Of course, it is human beings who choose the data that algorithms use, and humans have biases whether they are conscious of them or not.

"Because machine learning is dependent upon data, if the data is biased or flawed, the patterns discerned by the program and the decisions made as a result will be biased, too," said Dehzangi, pointing to a common saying in the tech industry: "garbage in, garbage out." “There is not a successful business in operation today that is not using AI and machine learning,” said Dehzangi. Whether it is making financial investments in the stock market, facilitating the product development life cycle, or maximizing inventory management, forward-thinking businesses are leveraging this new technology to remain competitive and ahead of the curve. However, if they fail to account for bias in these emerging technologies, they could fall even further behind, remaining mired in the flawed data of the past. Research has revealed that if care is not taken in the design and implementation of AI systems, longstanding social biases can be embedded in the systems' logic.