Week 2
WHAT?
The core task of this week's workshop was to activate our university web hosting account and to create and upload a first web page, making it accessible via a public link. Before class, I read the course materials on the basic concepts of HTML, CSS, and client-server architecture, and completed the relevant part of the "Make a Website" course on Codecademy as required, gaining a preliminary theoretical understanding of web page structure and style. In the workshop, we did the following: following the guide, I visited leedsnewmedia.net/cpanel, reset my password, and logged in to the cPanel control panel. I then used FileZilla to connect to the server, created a public_html folder and an index.html file locally, and wrote HTML with essential elements such as headings and paragraphs. At the "connecting to the server" step, a common problem appeared across the classroom: because students were using different browsers and operating systems, many failed to authenticate when configuring the connection parameters.
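The workshop steps can be sketched in a few lines of Python: writing a minimal index.html with headings and paragraphs, then placing it in public_html over FTP (the same job FileZilla does). This is only an illustration; the hostname and credentials shown are placeholders, not the real course server details.

```python
# Sketch of the Week 2 workflow: a minimal HTML page uploaded into
# public_html via FTP. Host/user/password below are hypothetical.
from ftplib import FTP
from io import BytesIO

INDEX_HTML = """<!DOCTYPE html>
<html>
  <head><title>My First Page</title></head>
  <body>
    <h1>Hello from my hosting space</h1>
    <p>Uploaded via FTP in the Week 2 workshop.</p>
  </body>
</html>
"""

def upload_index(host: str, user: str, password: str) -> None:
    """Connect to the hosting server and store index.html inside public_html."""
    with FTP(host) as ftp:                 # plain FTP, as with FileZilla
        ftp.login(user, password)
        ftp.cwd("public_html")             # the web-visible folder
        ftp.storbinary("STOR index.html", BytesIO(INDEX_HTML.encode("utf-8")))

# upload_index("ftp.example.net", "username", "password")  # placeholder values
```

If the login details or server settings are wrong, `ftp.login` raises an error, which is exactly the authentication failure many of us hit in class.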
SO WHAT?
This experience deepened my understanding of how digital technology is learned and practiced. The HTML/CSS knowledge from before class is the theoretical blueprint for "building a web page", while the FTP connection process in the workshop is "paving the road to the construction site". I realized that understanding the full lifecycle of a digital product, from local development to online release, is just as important as the technical skills themselves. The different problems students encountered while connecting vividly demonstrate the view that "digital structures are constructed objects". A seemingly simple "upload file" action relies on the precise cooperation of a series of software and hardware components (the browser engine, the operating system's network settings, and the client software's configuration). The "black box" of technology is opened only when problems arise, letting us see the complexity and fragility inside. When faced with confusion, we do need to seek support, and that process is itself important learning. As a first-time user, I also gained first-hand experience of the "usability" of a website backend. Document 1 asked us to reflect on the ease of use of websites we visit frequently, and the "product" I experienced this week was the university's hosting system. Are the error messages behind a failed connection clear? Is the workflow intuitive enough? This led me to think about technology tools from the perspective of a designer, not just a consumer, and to understand the importance of optimizing user experience.
NOW WHAT?
I recognize that in digital media work, encountering technical issues is the norm, so I will treat this experience with connection problems as a valuable exercise. When I hit similar obstacles in the future, I will troubleshoot more systematically: check the network, verify settings, consult documentation (such as the guide in Document 2), use search engines, and ask the community for help when needed. This will help me develop the independent problem-solving skills essential for a digital media scholar and practitioner. I will also keep thinking critically: in follow-up study, I will draw on the readings recommended in Document 1 (for example, on platform gravity and the relevance of algorithms) to connect the hands-on experience of building a website with broader social, cultural, and ethical questions, achieving a genuine unity of knowledge and action. All in all, although this week's workshop involved some technical twists and turns, it gave me a deeper understanding of how websites are built, hosted, and finally presented to users, and laid a solid practical foundation for my further study of digital media.
Week 3
WHAT?
At the start of the session, with the teacher's guidance, we first solved the connection problem and successfully linked our local code editor to our hosting space on leedsnewmedia.net. This ensured we could upload our HTML files to the public_html folder and access them through our own URLs. It may seem like a basic operation, but it is the crucial step that turns ideas into online reality. We then learned that every headline, paragraph, image, and link we browse every day is an identifiable "data component". To put this into practice, we used the ready-made crawler tool WebScraper. I chose the BBC iPlayer website as my target and used the tool to identify and scrape metadata for the video programmes on the page, such as programme names and descriptions.
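The idea behind WebScraper can be shown in miniature with Python's standard-library HTML parser: metadata lives inside identifiable tags, so a parser can pull it out field by field. The markup below is invented for illustration, not BBC iPlayer's real HTML, and the class names are hypothetical.

```python
# Minimal sketch of the scraping idea: programme titles and synopses
# identified by (invented) class attributes and collected as records.
from html.parser import HTMLParser

SAMPLE = """
<div class="programme">
  <h3 class="title">Blue Planet</h3>
  <p class="synopsis">Life in the oceans.</p>
</div>
<div class="programme">
  <h3 class="title">Sherlock</h3>
  <p class="synopsis">A modern detective.</p>
</div>
"""

class ProgrammeScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.capture = None   # which field we are currently inside
        self.records = []     # one dict per programme

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls == "title":
            self.records.append({})   # a "title" starts a new record
            self.capture = "title"
        elif cls == "synopsis":
            self.capture = "synopsis"

    def handle_data(self, data):
        if self.capture and data.strip():
            self.records[-1][self.capture] = data.strip()
            self.capture = None

scraper = ProgrammeScraper()
scraper.feed(SAMPLE)
# scraper.records now holds one {"title": ..., "synopsis": ...} dict per programme
```

The extraction only works because the markup is regular; the same point the workshop made about scraping depending on clear code structure.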
SO WHAT?
The meaning of this class goes far beyond learning a new technique; it fundamentally changed my perception. I used to browse the web as a passive consumer of information. Now I have learned to look at web pages with a "researcher's eye". Looking at BBC iPlayer, I stopped asking "what's available?" and started asking "how are these programmes categorized and presented?", "what content strategy does the page structure reflect at the BBC?", and "could I analyze trends in popular shows by comparing data at different points in time?". The web itself becomes a huge, open research database. As soon as I had scraped the data, I also thought of the related ethical questions: where are the boundaries on how this data may be used? Am I respecting the site's terms and copyright? This made me realize that technical competence must go hand in hand with ethical responsibility. While scraping, I discovered that the key to success lies in accurately identifying the HTML tags and attributes that wrap the target data. The visual presentation of a web page depends entirely on its underlying code structure, and crawlers exploit that structure to extract information precisely: if the structure is confusing, scraping becomes difficult; if it is clear, scraping is a breeze.
NOW WHAT?
I will begin to think consciously about how web crawlers could be applied to research questions that interest me. For example, could I scrape comments on a trending event from a news site for sentiment analysis? Or collect product information from different e-commerce platforms for price-comparison research? The skills I learned this week opened new doors and showed me the great potential of digital research methods in the social sciences, media studies, and beyond. In conclusion, this week's course not only taught me a practical technical skill but, more importantly, gave me a whole new way of thinking: seeing the web as a data-rich field to explore and question. I look forward to continuing to build my digital research skills in the coming weeks.
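The sentiment-analysis idea above can be sketched very simply: once comments are scraped, even a crude word-count classifier gives a first signal. The word lists here are tiny, invented examples; real sentiment analysis would use a proper lexicon or model.

```python
# Toy sentiment classifier for scraped comments: count positive vs
# negative words. Word lists are illustrative only, not a real lexicon.
POSITIVE = {"good", "great", "love"}
NEGATIVE = {"bad", "awful", "hate"}

def sentiment(comment: str) -> str:
    """Label a comment by the balance of positive and negative words."""
    words = comment.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

Applied over a scraped comment list, the labels could then be counted per event or per platform, which is exactly the kind of comparison mentioned above.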
Week 4
WHAT?
This week's "Data and Data Analysis" workshop focused on the definition, classification, and power of data in society. We first learned key terms such as "data", "dataset", and "data collection", and drew on Crawford (2021) and feminist data perspectives to emphasize that data classification is an act of "building the world". In the hands-on session, my group was assigned Scenario 1: company-led data collection. We played the role of user researchers at a generative AI startup focused on web creation, and our task was to design a survey on how students use generative AI, in order to identify gaps in the market. Our discussion covered: clarifying that we were collecting data to develop an AI tool that better assists students with web design and development; deciding what data to collect, such as how often students use AI for web page creation, which specific tasks they use it for (generating code, designing layouts, debugging), how satisfied they are with existing tools, and whether they would be willing to pay; and working out how to obtain informed consent, ensure anonymity, and avoid introducing bias into the collection. After the group sharing, the teacher gave feedback on our proposal and prompted us to think more deeply about how questionnaire design shapes the quality of the data and the analysis that follows.
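The questionnaire we designed implies a data schema, and sketching it makes the ethical choices concrete: consent is recorded per response, and only consenting responses enter the analysis. All field names and example responses below are hypothetical.

```python
# Sketch of the survey dataset our questionnaire would produce,
# with invented fields and responses, plus a consent-aware aggregation.
from collections import Counter
from dataclasses import dataclass

@dataclass
class SurveyResponse:
    consented: bool        # informed consent recorded per response
    usage_frequency: str   # e.g. "daily", "weekly", "rarely"
    tasks: list            # e.g. ["generating code", "designing layouts"]
    satisfaction: int      # 1-5 rating of existing tools
    would_pay: bool

responses = [
    SurveyResponse(True,  "daily",  ["generating code"], 3, True),
    SurveyResponse(True,  "weekly", ["designing layouts", "debugging"], 4, False),
    SurveyResponse(False, "daily",  ["generating code"], 2, True),  # no consent
]

# Only analyse responses where consent was given - the ethical point above.
consented = [r for r in responses if r.consented]
task_counts = Counter(t for r in consented for t in r.tasks)
```

Note how the fixed `tasks` vocabulary already "builds a world": a use like "creative inspiration" simply cannot appear in `task_counts` unless the questionnaire offers it as an option.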
SO WHAT?
This class gave me a deep understanding of the ethical dimensions and power relations behind data practices. When our group decided what questions to ask and what options to offer, we were essentially delineating and categorizing the phenomenon of "students using AI". For example, if we only provided options such as "code generation" and "copywriting", we might overlook other uses such as "creative inspiration" or "project management", thereby erasing those possibilities from the "world" we were constructing. When we discussed consent, we realized that simply writing "this survey is anonymous" at the top of the questionnaire was not enough; we needed to state clearly what the data would be used for, how long it would be stored, and who would have access to it. As data collectors, we have a responsibility to treat participants' data transparently and ethically, just as we would expect others to treat ours. At the same time, the specific role of "company researcher" profoundly shaped our data collection goals: our core drivers were "identifying market gaps" and commercial interest, which differ fundamentally from "improving services" in Scenario 2 (the university) or "understanding habits" in Scenario 3 (academics). This made me understand that there is no absolutely "neutral" data; data collection always serves the goals and values of some actor, which supports the feminist data critique of data's supposed objectivity.
NOW WHAT?
Inspired by this session, in the future, whether I am conducting academic research, reading a data analysis report, or simply clicking "Agree" on an app's terms of service, I will habitually ask myself: How is this data defined? Who defines it? Whose voices may be missing? Whose interests does it serve? This mindset will help me become a more responsible data producer and a more insightful data consumer.
Week 5
WHAT?
The core theme of this week's workshop was data visualization. At the beginning of the lesson, the teacher revisited the previous week's data collection task and asked us to think about what the collected data could be used for. After discussion, our group concluded that if the data were comprehensive enough, it could be shared with potential investors as a powerful way to demonstrate market potential and user demand, thereby helping the company grow. The teacher then systematically introduced the basic concepts of data visualization, including datasets, variables, and chart types, and we analyzed the advantages, disadvantages, and suitable scenarios of different chart formats (such as bar charts, pie charts, and line charts). The teacher demonstrated how to turn raw data into charts in Microsoft Excel, though I personally found it hard to work independently even after understanding the principles, because I am unfamiliar with Excel's functions and formulas. Next, the teacher introduced more professional visualization tools and asked us to visualize the data collected through last week's questionnaire. Finally, we compared the charts we created with those automatically generated by survey software such as Google Forms or Microsoft Forms, analyzing the differences in message delivery, design aesthetics, and storytelling.
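The raw-data-to-chart step that Excel handled in class can be illustrated in a few lines: count the answers to one survey question, then render the counts as a bar for each category. The answers below are invented, and the "chart" is just text, but the pipeline (raw responses → aggregated counts → visual encoding) is the same one Excel automates.

```python
# Minimal sketch of turning raw survey answers into a bar chart,
# using a text rendering in place of Excel's chart builder.
from collections import Counter

answers = ["daily", "weekly", "daily", "rarely", "daily", "weekly"]  # invented data
counts = Counter(answers)

for label, n in counts.most_common():
    print(f"{label:<8} {'#' * n}  ({n})")
```

Even here, choices appear: sorting by frequency (as `most_common` does) already tells a different story than sorting alphabetically, which is the "visualization as narrative" point of this week.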
SO WHAT?
Data visualization is far more than "drawing charts"; it is a process of narrative and persuasion. Which chart to choose, which variables to highlight, and which colors to use are not just technical choices but expressions of viewpoint and even ideology. Our group's idea of using the data to convince investors itself meant that the visualization had to serve a clear audience (investors) and purpose (securing investment). Although the charts automatically generated by questionnaire software are fast and accurate, they are often generic and rigid; creating visualizations manually lets us craft "key messages" that guide the audience toward the conclusions we want to emphasize. Collecting data is only the first step; clarifying "what story do we want to tell with this data" and "how do we tell it well" is where the impact lies. Who is the audience? What do we want them to feel or do? What key messages must be conveyed? Answering these questions is itself a form of deep data analysis. My difficulties with Excel also made me realize that turning ideas into practice requires solid tooling skills.
NOW WHAT?
In future data projects, I will always consider possible biases in my data and strive for inclusivity and fairness in visualization design, to avoid reinforcing stereotypes. Before starting any analysis or visualization task, I will first think about audience and purpose, so that my work is strategically and communicatively oriented from the start rather than a mere showcase of technique. Good data visualization is a fusion of science, art, and strategy: the bridge that transforms cold numbers into warm, persuasive, and responsible stories. I look forward to applying these insights in future study and research.
Week 8
WHAT?
In class, we first discussed the six categories of data collected by platforms (content, demographics, location, search, browsing profile, biometrics) and explored the motivations behind collection, mainly targeted advertising and content recommendation. Following the task instructions, I looked for an explanation similar to Facebook's "Why am I seeing this ad?" on the platform I use most, Xiaohongshu, but found it less transparent: it mainly offers "content preferences" settings that let users adjust their areas of interest, rather than explaining the data reasoning behind recommendations. We then practiced the method proposed by David Sumpter: manually categorizing the last 15 posts from 32 friends. This process exposed the subjectivity inherent in the act of classification itself: my classification criteria were likely different from another classmate's, which directly affects what the final data points and visualizations look like.
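The subjectivity we hit in the manual exercise can be shown in miniature: classify the same posts under two different keyword rule sets and the resulting distributions disagree. Posts, categories, and keywords below are all invented; they stand in for the criteria each classmate chose.

```python
# Two invented classification schemes applied to the same four posts:
# identical data, different categories, different "results".
from collections import Counter

posts = [
    "Holiday photos from Spain",
    "New gym routine, feeling great",
    "My cat asleep on the keyboard",
    "Half-marathon training update",
]

def classify(post, rules):
    """Return the first category whose keywords appear in the post."""
    for category, keywords in rules.items():
        if any(k in post.lower() for k in keywords):
            return category
    return "other"

rules_a = {"travel": ["holiday", "spain"],
           "fitness": ["gym", "training", "marathon"]}
rules_b = {"lifestyle": ["holiday", "gym", "cat"],
           "sport": ["marathon"]}

dist_a = Counter(classify(p, rules_a) for p in posts)
dist_b = Counter(classify(p, rules_b) for p in posts)
# dist_a and dist_b disagree even though the posts are identical
```

Neither distribution is "wrong"; each simply encodes its author's choices, which is the point the exercise made about platform classification models.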
SO WHAT?
The first part of the task made me realize that algorithmic identity is not our true and complete self but a fluid, biased model built from quantifiable data points. The platform "sees" only the data it chooses to count and can count (likes, dwell time, links clicked), not our deeper motivations, complex emotions, or the rich context of offline life. Xiaohongshu's "content preferences" function seems to give users control, but it also simplifies identity into selectable interest tags, further reinforcing this modeled construction of identity. The second part of the task brought an even more critical revelation: algorithms are not objective, neutral "laws of nature"; their core logic is full of human judgment. Sumpter's 13 categories are themselves an artificial and potentially imperfect framework, and the confusion I ran into while classifying mirrors the challenges platform engineers face when designing classification models. The platform's portrait of a user is therefore bound to be biased, because subjective choices are embedded at every stage, from data collection (choosing what to count) to data processing (deciding how to classify). We may wish for platform algorithms that "truly understand us", but the context, ambiguity, and contradiction needed to understand human identity are hard to capture with current models based on classification and association. What we get instead is a practical portrait convenient for the platform's content distribution and advertising.
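The reduction described above can be made concrete with a toy "algorithmic identity": interest tags inferred purely from countable signals. The events, signal names, and weights below are all invented; real recommender systems are far more complex, but the structural point (only what is counted enters the model) is the same.

```python
# Toy interest profile built only from countable engagement signals.
# Topics, events, and the weighting are invented for illustration.
from collections import Counter

events = [
    {"topic": "cooking", "liked": True,  "dwell": 40},
    {"topic": "travel",  "liked": False, "dwell": 5},
    {"topic": "cooking", "liked": False, "dwell": 60},
    {"topic": "fitness", "liked": True,  "dwell": 10},
]

scores = Counter()
for e in events:
    # Arbitrary weights: a like is worth 2 points, 30s of dwell worth 1.
    scores[e["topic"]] += (2 if e["liked"] else 0) + e["dwell"] / 30

# The "identity" is whatever ranks highest; motive and context never enter.
profile = [topic for topic, _ in scores.most_common(2)]
```

Whether the long dwell on "cooking" meant enjoyment, confusion, or a forgotten open tab is invisible to the model, yet it dominates the inferred profile.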
NOW WHAT?
In the future, when I see content recommendations or ads on social media, I will no longer simply think "the algorithm really understands me"; I will treat them as a window into my "algorithmic identity" and ask: which of my actions did the platform base this inference on, and which parts of my real self does it miss or distort? While data collection cannot be completely avoided, I will think more carefully about what I share and the data traces it leaves. I will make more active use of the privacy and preference tools platforms provide (such as Xiaohongshu's interest adjustment); even though this is only fine-tuning within rules the platform sets, it is a necessary exercise of my rights as a data subject. Finally, since any classification system or algorithmic model carries its designers' values and assumptions, if I do similar work in the future I must stay reflective, proactively identify and articulate the biases my taxonomy may introduce, and try to make my models richer and more "human" rather than pretending they are absolutely objective.