I have been getting a particular advertisement showing up in my podcast rotation for a while now, and not only is the advertisement contrived, it seems to be selling a service that claims quite an impressive list of things. Perhaps I have read one too many synecdochic essays, or I have been hanging around too many people who have to examine services through the lens of "how could someone with malice and planning break this service?" but I had suspicions, and decided I would take a closer look at the service's claims to see if there were places where, were I a person with malice and planning, I could cause problems on purpose. [Honk.]
The service in question is called Zigazoo, pronounced zig-ah-zoo, and in the podcast advertisements and on the website, it claims to be "The World's Largest Social Network for Kids!"
The podcast advertisement purports to have a conversation between two children, one saying they saw the other's Zigazoo content and thinks the other person is really talented (while the other person has an "OMG, you watched it?" reaction to finding this out). That's one vector where things could immediately go pear-shaped, if the kids that are posting know each other outside of the platform, but that's not something that Zigazoo can control, so it would be unfair to hit them for anything other than portraying an unrealistic conversation between children.
Part of the advertisement, and the website, proudly proclaims that they are a "kidSAFE COPPA-certified" site, which apparently means that in addition to some safety rules, they follow additional rules and privacy practices that theoretically bring them into compliance with the U.S. Children's Online Privacy Protection Act, which itself only requires verified parental permission before a company can collect or use an under-13 child's personal information online and/or give them access to advertising, community forums, and social networking features. If COPPA sounds familiar, you might be remembering that Dreamwidth, along with other Netchoice members, is providing information and examples to Netchoice so that they can try to get overbroad, unenforceable, and burdensome laws overturned, and that many of these laws model themselves on COPPA in having strict parental permission and notification requirements and giving parents significant control over the online activities of their children…or they would, had their drafters rubbed sufficient braincells together to craft legislation that did the things they claimed it did. Some of the possible griefing techniques outlined in the dw_news post might apply here.
The verification methods that Zigazoo offers are, first, a phone number or a Google/Apple account, and then either a credit card number (thus indicating someone here is an adult and gives their permission) or a video of the grownup and the child together following some on-screen prompts (and therefore, supposedly, this child and adult are related and the adult is giving their permission for the child to have the account). So there's the possibility that one grownup gave permission and another did not, or that the child used a completely unrelated grownup, or that a grownup is using an unrelated child for verification. Or that the person with the credit card is old enough to have one, but is not necessarily related to the child. I have no idea how large this attack surface is, or how extensive the verification underneath is to make sure that the grownup and the child in the video, or the person with the credit card and the child, are actually related to each other in ways that would allow that grownup to give consent for the child to use the app.
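To make that gap concrete, here is a minimal sketch of the logic involved. All of the names are my own invention for illustration, not anything from Zigazoo's actual implementation; the point is only what each method can attest versus what a real parental-consent check would need:

```python
# Hypothetical model of the verification gap. The claim the platform needs is
# "this adult is this child's guardian and consents"; each method attests less.

def credit_card_attests(card_is_valid: bool) -> set:
    """A valid card proves only that someone old enough to hold one is present."""
    return {"an_adult_is_present"} if card_is_valid else set()

def video_prompt_attests(adult_and_child_on_camera: bool) -> set:
    """The video proves only that an adult and a child were in the same room."""
    return {"an_adult_is_present", "a_child_is_present"} if adult_and_child_on_camera else set()

# What a genuine consent check would have to establish:
REQUIRED = {"an_adult_is_present", "adult_is_childs_guardian", "guardian_consents"}

def consent_is_verified(attested: set) -> bool:
    return REQUIRED <= attested

# Neither method ever attests guardianship, so the check can never truly pass:
print(consent_is_verified(credit_card_attests(True)))   # False
print(consent_is_verified(video_prompt_attests(True)))  # False
```

The toy model's point is just that "guardianship" never appears in anything either method can attest, which is exactly the attack surface described above.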
So, assuming a child and a grownup have cleared the necessary hurdles to get verified and further into the app and the network, then we have to consider what's actually going to be present for them when they get there. This is where Zigazoo makes their two largest claims:
- All of the content posted to the platform is first run through a "stringent human moderation process" to ensure "users can't watch negative, harmful content"
- Kids can only interact with each other in positive ways, with "no text messaging or commenting capabilities" and "positive-only emojis, stickers, and shoutouts that promote healthy online relationships."

Which first brings to mind the SpeedChat Corollary: "By hook or by crook, customers will always find a way to connect with each other." This references the well-known workaround in Toontown Online, a Disney game that only allowed pre-selected phrases and actions that were supposed to prevent harassment: players communicated a code that let them text chat with each other instead of only using the stock phrases. Even if only given positive emojis, stickers, and shoutouts, I'm fairly certain that enterprising children will find ways to communicate the entire range of their feelings to each other on the platform, rather than off of it.
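The corollary has an information-theoretic backbone: any channel with even two distinguishable "positive" symbols can carry arbitrary messages. A toy sketch, with invented sticker symbols standing in for whatever the platform actually offers:

```python
# Toy covert channel: two "positive-only" stickers are enough to smuggle
# arbitrary text, one bit per sticker. The sticker choices are illustrative.
STICKERS = {"0": "⭐", "1": "✨"}
REVERSE = {v: k for k, v in STICKERS.items()}

def encode(message: str) -> str:
    """Turn a message into a cheerful-looking run of stickers."""
    bits = "".join(f"{ord(ch):08b}" for ch in message)
    return "".join(STICKERS[bit] for bit in bits)

def decode(stickers: str) -> str:
    """Recover the hidden message from the sticker run."""
    bits = "".join(REVERSE[s] for s in stickers)
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

hidden = encode("u stink")          # to a moderator: a wall of stars and sparkles
assert decode(hidden) == "u stink"  # to the recipient: the real message
```

Squash any one signaling scheme, and the kids will simply negotiate another.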
Additionally, with the claim that all the video content has to go through human moderators: well, even if none of the content makes it to the platform itself, I certainly could do a lot of damage to the moderators by making them watch content as inappropriate as I could come up with. Or, if I'm a child, I will probe as best I can to find what I can get away with saying or doing that will make it through the human moderators. Because it is absolutely possible to construct something that looks genuinely positive, but to those who understand everything in the shot and everything said, it is anything but. And while we can hope that the adult moderators are paid well, get to take breaks from having to moderate, and are keeping up on kid lingo, they are trying to moderate beings who are endlessly creative and who, if they are interested in testing boundaries or dropping disses that sound nothing like a negative statement, will keep trying to find where the line is. Even if there's no editing allowed. Maybe a particular filter becomes the shorthand for "this is opposite video time!" and then is filled with all kinds of sweet things being said completely insincerely, but convincingly. And if that filter gets squashed, then another one will become the signal.
I also wonder if the app allows users to block each other or otherwise indicate they don't want to interact. Because if all of the interaction tools are positive, then why would you need to block someone for using positive tools, right? I forget exactly where it was mentioned on the Internet, but I know I've heard it from azurelunatic at least, that receiving a dozen roses from your lover at your house is sweet, but receiving a dozen roses from your stalker at your house is terrifying. (And a similar something about LiveJournal's virtual gifts, as I recall, and how such things needed to be refusable, because otherwise they could become a vector for abuse. Like the person in our current times who tips / sends you a small amount of money on a cash app for the ability to fill the comment box with all the vituperation and hate they can stuff into it, and for a fairly small price, no less.)
So I really hope the people who do the moderation work for Zigazoo know what they're getting into, and what things they might have to see and report upon at the inevitable point that someone finds an attack and abuse vector toward the people who are on the platform: either an attack where they don't care that the account gets suspended, because they've gotten their jollies traumatizing the content moderators, or one where they've figured out how to slip all the viciousness they can past the human moderators.
Obviously, no social site is a utopia of positivity and good cheer, and those that think they can technology their way into one or restrict the users sufficiently that they can only interact positively and only see positive things are forgetting the endless creativity of humans.
There's another issue with Zigazoo that has nothing to do with their moderation policies or their verification process or their misguided belief that they can keep their users from expressing their true opinions of each other, and I think this one is much more easily abusable. On their main page, in addition to the app-store download buttons (the app is only available in the app stores), they have a prominent advertisement for brands to partner with them for:
- Branded Content Channels
- Marketing Campaigns
- Market Intelligence
- Access to Kid Creators
- Product Gifting/Sampling
- Access to direct feedback from real kids and families
- Parent-Targeted Media
…and I think I already know how they intend to pay for all that video content that's going on the platform, since video is expensive. They've got a captive audience right there, and even better, their grownups have already given permission for the kids to see all kinds of advertisements. Obviously, I don't expect to see Playboy or Penthouse trying to forge partnerships with Zigazoo, but the firms that are making partnerships there are clearly doing so for access to kids for their market research purposes, and to try to get the kids to sell their parents on their products. (Or to convince the parents that their products are somehow the best for those kids.) This particular model of partnerships and advertising might manage to sidestep much of the multi-part analysis (three parts, in fact) that synecdochic presciently posted right around the beginning of Dreamwidth about the inability of social media sites to keep the lights on and pay the bills with advertising dollars alone. Given that Zigazoo claims that all their users are verified kids, the advertising can probably target a little more effectively than it might on an otherwise regular social media site. I'm not sure that it will be enough to make things profitable, but it might lose money more slowly.
What's keeping the kids there is not the promise of a positive social media experience, or the things being touted at parents that say this social media experience is going to be better than the others because of the moderation and the restrictions. What's keeping the kids there is the promise of filthy lucre. Zigazoo offers an "exclusive Creator Club", which offers the following benefits to those accepted:
- Brand Connect program $$ for sponsored branded content
- Affiliate marketing program $$
- Exclusive swag
- Early access to celebrity guest videos
Because "paid social media influencer" is apparently something we want to dangle in front of kids, asking them to do the work of convincing other kids to buy products, or to get their parents to buy them products. If I were a parent trying to find a good social media site for my child, and my objection was that the kids are already getting bombarded with too many ads from their current media consumption, I can't say that Zigazoo is going to instill me with the belief that they are, in fact, a better social media experience than any of the other places that are around. I sincerely hope that all the branded sponsored content and the affiliate content is very clearly marked, so that the kids watching understand that the person whose videos they've been watching is getting paid to make this video or to wear these clothes and so forth. It would be particularly slimy marketing for kids to be creating sponsored and affiliate content that is then distributed to other kids without that clarity.
There's one other thing to examine about Zigazoo, and it's that, right there at the bottom of all of their pages, is an innocuous link entitled "Zigazoo Challenges." Since this is the kind of place that proclaims that it has challenges for people to do, and prizes to win (in addition to the affiliate program material already discussed), as a way of getting content recorded on the app, I find it not particularly good that the challenge list appears to instead be an alphabetically-arranged list of titles of videos uploaded to the service, in their various buckets in the site map. There's the username and their bio underneath each video link. And some of those bios have personally-identifiable information in them, like age, name, birthday, and the like. (Whether those details are real or not is something that would have to be investigated, but still...) Many of them also mention other sites where the user might be found, like YouTube. Because why not?
I can't directly access their user pages from the website with casual effort, but I may have gathered the structure of the URLs anyway: trying one method gave me a blank page with the "get the app in the app stores" buttons, and another method gave me a 404 error, so I'm guessing the structure was right, but there is at least some protection in place to prevent an unauthorized connection from collecting profile data. Someone with more time, patience, and/or malice than me might try to see if that protection could be broken, and how trivial breaking it would be. Or, possibly, if they were signed in to the app, the webpage would display just fine for them, because then it would be an authorized connection. There's a lot here just for me, randomly poking through the possible challenge prompts. There may be even more that's not related to the challenges, but still, that's a lot of videos available on the drive-by for anyone to see. Better hope those moderators were on target, and that they were able to scrub anything that might have been identifying from all the videos that have been submitted so far, and that there's not anything in someone's bio and videos together that would make identification possible, right?
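For the record, the kind of poking described above is not sophisticated; it is just comparing responses. Here is a generic sketch of the classification logic, with hypothetical status/body pairs; I am deliberately not reproducing any real URL scheme:

```python
# Differential-response probing: a 404 versus a 200 that serves only an
# app-install shell tells you which URL shapes the server recognizes,
# even when no actual profile data ever comes back.

def classify_probe(status: int, body: str) -> str:
    """Interpret the (status code, body) pair returned by probing a URL."""
    if status == 404:
        return "unrecognized"          # the server doesn't know this URL shape
    if status != 200:
        return f"error {status}"       # some other failure; no signal either way
    if "app store" in body.lower():
        return "recognized but gated"  # valid structure, but login/app required
    return "served"                    # actual content came back

print(classify_probe(404, ""))                                # unrecognized
print(classify_probe(200, "Get the app in the app stores!"))  # recognized but gated
```

The asymmetry is the leak: without ever seeing profile data, a prober learns which guesses at the URL structure are correct, which is why serving consistent responses for valid and invalid paths matters.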
I freely admit, the amount of social media time that I think is appropriate for children, until they sufficiently understand what they might encounter online and what they should do if they hit the parts of online that are not well-behaved, is none. And that "none" includes grownups putting them on social media. Anything relayed online about a kid should be done with that kid's permission and understanding. If the kid cannot understand or give permission, or doesn't, then it doesn't go there. So I am inclined to find the concept of Zigazoo itself terrible and to ask what the people who created it were thinking they were going to do. Child-safe TikTok? If that's what they were aiming for, there's still some work to go, I think, on the "child-safe" part of it. I suspect the actual answer of what Zigazoo is for is the advertising dollars being spent to entice kids and grownups, and the opportunity for companies to sponsor children with their products. And that is very much not child-safe, even if it is COPPA-compliant.
no subject
Date: 2024-03-21 05:25 am (UTC)
- keep children/their photos safe from sexual predators
- keep children safe from estranged non custodial parents/grandparents
(altho obviously both of these are IMMENSELY important)
it's also
"don't push energy drink advertisements, gambling advertisements, alcohol advertisements, religious advertisements on people who are underage"
no subject
Date: 2024-03-21 05:57 am (UTC)
"Hi! You must be Stephve! I'm Tonheigh's mom, you know, from Ziggy? Tonheigh is so excited to meet you, we are just in town for the afternoon, can we give you a ride home? Tonheigh's at the library because I wanted this to be a surprise."
no subject
Date: 2024-03-21 09:19 am (UTC)
Video hosting is expensive, but not as expensive as skilled work, on a massive scale, by people who'll pass a police check to work around children, and who will all need to be frequently replaced because it's doing them too much psychological damage to continue (permanently removing them from the potential hire pool from which every other social media company is also hiring moderators and/or trust and safety employees.)
no subject
Date: 2024-03-21 04:18 pm (UTC)
(n.b. for others who don't know my history, I did text pre-screening and user interaction moderation work for a few years; I had the opportunity to apply for user-reported Bad Stuff moderation work but I knew I was not suited; I still do some spam work from time to time.)
no subject
Date: 2024-03-21 04:00 pm (UTC)
Also, the infosec fails common to new social media sites (new sites of every stripe, tbh) have the potential for some very serious real-world harms.