This thread got long, so here is a perhaps more easily read copy of it:
One thing that came up on #InLieuOfFun that I didn't get the chance to answer was @klonick asking about whether the earlyish content moderation was based on "First Amendment Norms." I think the answer to that is a bit more complicated than it may seem.
1/
Am speaking from my experience at Google (outside counsel 2000-3, inside 2003-9) and Twitter (2009-13). Others may have used different approaches.
2/
By "First Amendment Norms" I take @Klonick to mean that the platforms were thinking about what a govt might be OK banning under 1st Am jurisprudence in the US.
Of course, the platforms aren't govt & 1st Am doesn't speak to what govts ban, only what they cannot. But still...
3.1/
To restate, "1st Am Norms" might be something like platforms ~only~ removing what was removable under US 1st Am jurisprudence ~and~ had been generally made illegal in the US (or elsewhere if doing geo-removals), irrespective of 47 USC 230.
3.2/
First, lots of content removal was simply not cognizable under 1st Am analysis. Spam was a significant issue for Google's various products & Twitter. I don't know of a jurisdiction where spam is illegal & it is unclear whether a govt banning it would survive 1st Am.
4.1/
Nevertheless, spam removal (both by hand and automated) was/is extremely important and was done on the basis of improving user experience / usefulness of the products.
4.2/
Similarly, nudity & porn were sometimes banned for similar reasons. Some types of products (video) might be overrun by porn and be unwelcome for other uses / users if porn was not discouraged through removal, especially early. And yet, the 1st Am is quite porn-friendly.
4.3/
There were also some places that might look like they fit 1st Am norms but were really the platforms deferring to courts. For example, a court order for the removal of defamation would result in removal (irrespective of §230 immunity).
5.1/
You can square that w/ 1st Am norms but the analysis was not based on what types of defamation or other causes of action the 1st Am would allow, but rather deferring to courts of competent jurisdiction in democracyish places.* <- this last bit was complicated + inexact.
5.2/
Where we refused, it was often about fairness, justice, human rights, or jurisdictional distance from the service, not the 1st Am per se.
5.3/
All of that said, I do think there were times when we looked to the 1st Am (and freedom of expression exceptions more generally) to try to grapple with what the right policy was for each product.
6.1/
For example, in deciding what types of threats we would remove from Blogger, we used US precedent to guide our rules. My memory is hazy as to why, but I believe it stemmed from two factors: (a) that we felt that we were relatively new to analyzing this stuff but that
6.2/
the Courts had more experience drawing those lines, and (b) that the Courts and Congress, being part of a functioning democracy, might reflect the general will of the people. These were overly simplistic ideas but that's my memory.
6.3/
In summary: while I think there is something to the idea that 1st Am norms were important, I think the bigger impetus was trying to effectively build the products for our then users -- to have the product do the job the user wanted -- within legal/ethical constraints. But...
7.1/
But, we did all of that from a particular set of perspectives (and that's what the 1st Am norms are probably part of) that was nowhere near diverse enough given the eventual reach and importance of our products.
7.2/
I'd love to hear from others doing or observing this work at the time on whether I'm misremembering/misstating @nicolewong @goldman @delbius @jilliancyork @adelin @rmack @mattcutts @clean_freak @helloyouths @dswillner +many more + those who aren't on Twitter… (please tag more)
8/
And, in case you want to see the question I'm referring to, from @Klonick on #InLieuOfFun look here at minute 22:11 (though the whole conversation was good):
http://youtu.be/oYRMd-X77w0?t=1331
9/9
First Amendment and Earlyish Content Moderation
Posted by A M on 5/07/2023 [Labels: expression, law]
This post is co-authored by Nicole Wong and me.
We both set to work trying to figure out how to help Googlers launch successful products that were legal (at least in the countries where we operated). We each had some experience with this as outside counsel, and we were both pretty unsatisfied with the typical model of legal review for products.
That model was taken from big companies which historically treated legal review like part of an assembly line (towards the end). The product teams would develop products and then check in with a line of legal subject matter experts for sign-off before launch. For example, a product that matched people to their perfect pet might get designed, written, tested, and be ready to launch when it was then taken for review by a commercial lawyer for the terms of service, an intellectual property lawyer for trademark and copyright clearance, a patent lawyer in case anything new had been invented, a regulatory attorney for regulatory compliance (sometimes including privacy), and maybe an export control lawyer and a similar set of experts in the countries where the product was launching. Law firms are typically departmentalized in similar ways, aligning along legal subject matter specialization, and consequently smaller companies who don’t have in-house counsel often need to hire multiple specialized lawyers.
There are several major problems with this process:
- legal approval in each area is binary and too late: by the time the product is built, there is a large amount of pressure to launch with little ability to make more than cosmetic changes to the product;
- legal would understand the law but not necessarily the product: dividing up legal counsel by area of legal specialization means that each lawyer has depth in the law but only breadth across products;
- legal becomes “them” versus the product team’s “us”: last minute binary review by people who don’t know the product or the product team unnecessarily forces misalignment between the team trying to get something done for the users and the business, and the lawyers. That misalignment can result in all sorts of bad, from simple misunderstandings to adversarial behavior.
This approach is not without downsides. Perhaps the biggest is that product depth can come at the expense of legal depth, which meant that we sometimes incurred costs working with outside counsel and experts in legal areas and countries outside our expertise, or missed legal issues. However, we remain convinced that the vast majority of significant mistakes in-house departments make in our industry are the result of not understanding the product rather than not understanding the law. Another downside is that while being part of the “us” of a team is satisfying, can result in a much better understanding of a product, and leads to better teamwork in identifying and fixing problems, it can also mean you are inside the team’s “groupthink” rather than removed from it. Careful attention must be paid to all of the ways to reduce groupthink, and it is imperative that you actively seek input from folks outside the bubble if you are going to effectively understand the various impacts your product decisions are likely to have in the world. We found it really helpful to discuss product features with advocacy organizations, and they frequently improved the products. But there were also definitely times we screwed up.
The actual role of “product counsel” grew out of the fact that our previous job descriptions didn’t make much sense given how we were doing our jobs. So we started thinking through names. Originally, we liked “launch counsel” because it was active, aligned with what our teams were trying to do, and could describe a bunch of different areas of law. Eventually we settled on “product counsel” because it was even more descriptive of the alignment we hoped for, and was tied to the whole lifecycle of a product from idea generation through maintenance and refinement, not just launch.
Our first job posting was in February 2004. It read:
Google is looking for experienced and entrepreneurial attorneys to develop and implement legal policies and approaches for new and existing products. The Product Counsel will be responsible for a portfolio of Google products across many legal subject areas including privacy, security, content regulation, consumer protection and intellectual property. Indeed, the only product legal matters with which this position will not be deeply involved are those that are strictly patent or transactional in nature, which are handled by other existing Google lawyers.
Requirements:
Passion for and deep understanding of internet.
Very strong academic credentials.
Solid understanding of Internet architecture and operation.
Ability to respond to questions/issues spontaneously.
Demonstrated ability to manage multiple matters in a time-sensitive environment.
Strong interpersonal and team skills.
Excellent interpersonal skills, dynamic and highly team-oriented.
Flexibility and willingness to work on a broad variety of legal matters.
Superior English language writing and oral communication skills.
Sense of humor and commitment to professionalism and collegiality are required.
California Bar
=======================
Note the many mistakes in that posting. For example, the Internet is referred to with both lower-case and upper-case capitalization (back then I was incorrectly not capitalizing it). Ugh.
Even so, we were very fortunate to recruit an amazing set of folks at Google to become the first Product Counsel. Some of the originals who defined the role were: Glenn Brown, Trevor Callaghan, Halimah DeLaine, Brian Downing, Gitanjli Duggal, William Farris, Mia Garlick, Milana Homsi, Susan Infantino, Lance Kavanaugh, Daphne Keller, Courtney Power, Nikhil Shanbhag, Tu Tsao, and Mike Yang (in alphabetical order). The team was eventually about forty-strong by the time we left and worked across many countries. The idea of it spread relatively quickly in the industry.
Product Counsel, particularly when we were still doing it and not just managing people doing it, was one of the best jobs we have ever had.
Posted by A M on 4/20/2023 [Labels: google, law, practice]
Update: There is a good thread on a number of South Dakota Tribes' COVID-19 funds; it points to South Dakota because of the lack of a shelter-in-place order there.
Refugees and Displaced People
EIN: 80-3405530
Update: A friend who knows more than I do about refugee issues points to the following two orgs:
International Rescue Committee, Signpost Project [donate]
EIN: 13-5660870
Refugee Advocacy Lab at Refugees International + International Refugee Assistance Project (IRAP) [donate: Refugees International or IRAP]
EINs: 52-1224516 (Refugees International) or 82-2167556 (IRAP)
Matching refugees who have healthcare experience with states that need healthcare workers, and helping them get the certifications they need to practice, thereby helping both.
- The Catholic Bishop’s Fund [donate]
- Seguimos Adelante [donate]
- Abara [donate]
Update: The Reform Alliance has a special COVID-19 action page to attempt to get governmental attention to this problem. Thanks @rklau.
Domestic Abuse
Children
Miscellaneous and Support
Other Good Writeups & Resources
Update: Isaac Chotiner, The Danger of COVID-19 for Refugees, April 10, 2020.
Q&A with David Miliband, the president and C.E.O. of the International Rescue Committee, about the specific issues raised by COVID-19 in refugee communities, where he highlights disinformation as an important issue.
Posted by A M on 4/16/2020
10 years ago today, Twitter launched "native retweet" and significantly changed how people experienced the Twitter timeline. IMHO it was a huge and relatively gutsy change. I’m writing this post to explain what changed, and why I like a particularly controversial aspect of it -- "strangers in the timeline" -- so much. I hope it will encourage others who were at Twitter at the time to share their stories.
First off, imagine the Twitter of early 2009. It was a simpler Twitter in SOOOOO many ways. Timelines were a reverse chronological set of 140 character tweets. There were no ads. No images. And no mobile phone client from Twitter. Barack Obama had just been inaugurated.
[Image: Late 2008 screenshot from Huynh, Terence, Twitter releases new design, more customisable, TechGeek, Sept 20, 2008]
On August 13, 2009 @biz made a short blog post pre-announcing a new retweet feature so that the Twitter client developers (none of whom worked at the company or were paid by Twitter) would be ready for it when it rolled out on twitter.com. At least one of those developers already had a retweet button that made retweeting easier, but the new feature @biz announced was different. It was simple and revolutionary. Now when I retweeted @BarackObama my followers would see his tweet as if they too were followers of @BarackObama for that instant. They would see his Tweet as if they followed him -- in their regular timeline -- but with an acknowledgement to me, and as if it had been tweeted when I hit the retweet button. As @Biz described it:
"Let’s say you follow @jessverr, @biz (that’s me), and @gregpass but you don’t follow @ev. However, I do follow @ev and the birth of his baby boy was so momentous that I retweeted it to all my followers.
![](http://iizhf.dechy.net/content/dam/blog-twitter/archive/project_retweet_phaseone95.thumb.1280.1280.png)
Imagine that my simple sketch is your Twitter timeline. You’d see @ev’s tweet even though you don’t follow him because you follow me and I really wanted you to have the information that I have." Photo and quotation from Stone, Biz, Project Retweet: Phase One, Twitter Blog, Aug. 13, 2009.
It made retweeting much easier,* but it also meant that users saw the faces of people they didn’t follow in their timeline (internally we called this the "strangers in the timeline" phenomenon). Retweets were also displayed based on the time of retweet, not the time of the original tweet (even though the timestamp was still the original one), so it looked like the tweets were being displayed out of order. Here’s what it looked like when it rolled out later that year (with a special dialog box to explain to people why they were seeing strange new avatars).
[Image: Screenshot by See-ming Lee CC-BY-SA]
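The mechanics described above can be sketched as a toy timeline model. All names and fields below are my own invention for illustration (Twitter's actual implementation is not public): the key idea is that a retweet is *ordered* by when the retweet happened, but *displays* the original author and the original timestamp, which is exactly what produced both the "strangers in the timeline" and the seemingly out-of-order tweets.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    text: str
    created_at: int  # original timestamp; always shown to the viewer

@dataclass
class Retweet:
    retweeter: str
    original: Tweet
    retweeted_at: int  # used for timeline ordering, never displayed

def timeline(events, followed):
    """Native-retweet timeline: a retweet surfaces the original tweet
    (original author's name/avatar, original timestamp) to followers
    of the retweeter, sorted by the time of the retweet itself."""
    visible = []
    for e in events:
        if isinstance(e, Tweet) and e.author in followed:
            visible.append((e.created_at, e))
        elif isinstance(e, Retweet) and e.retweeter in followed:
            # "Strangers in the timeline": e.original.author may be
            # someone the viewer does not follow at all.
            visible.append((e.retweeted_at, e.original))
    visible.sort(key=lambda pair: pair[0], reverse=True)  # newest first
    return [tweet for _, tweet in visible]
```

For example, if you follow only @biz, and @biz retweets an old tweet from @ev, the model puts @ev's tweet at the top of your timeline (ordered by the retweet time) while still showing @ev's original timestamp, so it can appear "out of order" relative to its displayed time.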
Anyhow, I’ll leave the rest of the stories about this to others who were closer to the decision and implementation. For now I just want to thank the folks that were there and call them out so that they (hopefully) tell more of the story. My memory is hazy, but I think at least @zhanna, @alissa, @cw, @goldman, @ev, and @biz would have good memories of it. Please share, tell the inside story, and add more folks I missed.
Posted by A M on 8/13/2019 [Labels: twitter]
Advice for new General Counsels (GCs)
[Image: Arabella Mansfield, 1870. Public Domain image from Wikipedia.]
[Image: Interior of H.A. Goodrich & Co.'s Store. Public Domain illustration from Fitchburg, Massachusetts Past and Present via Internet Archive.]
[Image: Crocker Block. Public Domain illustration from Fitchburg, Massachusetts Past and Present via Internet Archive.]
[Image: Thurgood Marshall, 1976. Public Domain image from the Library of Congress via Wikipedia.]
[Image: Clara Shortridge Foltz. Public Domain image from History of the Bench and Bar of California via Internet Archive.]
Posted by A M on 2/04/2023 [Labels: law, practice, twitter]
Recent Podcasts & Articles on Content Moderation
One of the great things happening now is that more and more attention is being focused on one of my favorite subjects: content moderation by internet platforms. It's an important subject because a large amount of online speaking and listening happens through platforms. There has been a ton of good writing about this over many, many years, but I want to focus on four relatively recent bits here.
Radiolab, Post No Evil, Aug 17, 2018
Radiolab tells a sweeping story of the development of Facebook's content removal policies, deftly switching perspectives from people protesting its former policy against breastfeeding, to the headquarters workers developing policy and dealing with high-profile controversies, to the offshore contractors on the front line evaluating thousands of pieces of disturbing content every day.
Post No Evil is a great introduction to the issues in this space but I think its most insightful moment is relatively buried. At 1:02, this exchange happens:
- Simon Adler: What I think this [controversy around a beheading video] shows is that Facebook has become too many different things at the same time. So Facebook is now sort of a playground, it's also an R-rated movie theater, and now it's the front page of a newspaper.
Jad Abumrad (?): Yeah, it's all those things at the same time.
Simon Adler: It's all those things at the same time and what we, the users, are demanding of them is that they create a set of policies that are just. And the reality is justice means a very different thing in each one of these settings.
Think of the content policies you might want at a library versus a dinner party. When I go to a library, it is very important to me that they have books about the tiny niche of the world that I am interested in at that moment. For example, books on bias in machine learning or Italian Amaros. It doesn't really bother me if they have books on things I don't care as much about, like American football. For books that I disagree with, such as To Save America, or think are evil, such as Mein Kampf, I may question the curators' choices but I expect breadth, and the inclusion of those books is less bad than if the books I cared about were not included.*
Change to the dinner party context and my preferences are reversed. Dinner parties that don't hit on bias in machine learning are fine by me, but if I was at a dinner party where someone couldn't shut up about American football, I would not call it a success. A dinner party where a guest was espousing the views of Mein Kampf would be one I would cause a scene at and leave. Over-inclusion is a huge problem and outweighs inclusion of my specific niche interests.
I've never been a big Facebook user, but it used to remind me of a dinner party. I thought that's what it was going for with its various content policies. Now, as Simon Adler says, it is trying to be many things (perhaps everything?) to many people (perhaps everyone?) and that is really hard (perhaps impossible?). It also has made the decision that some of the types of moderation that other platforms have used to deal with those problems (blocking by geography, content markings for age, etc.**) don't work well for its goals. As Radiolab concludes starting at 1:08:
- Robert Krulwich (?): Where does that leave you feeling? Does this leave you feeling that this is just, that at the end this is just undoable?
Simon Adler: I think [Facebook] will inevitably fail, but they have to try and I think we should all be rooting for them.
Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, Harvard Law Review, 2018
Professor Klonick does an excellent job of describing why platforms may want to moderate content, how they do it, and the legal and regulatory framework that underpins it all. This is a very large expanse of ground, covered extremely well.*** If you are new to this area and want an in-depth briefing, I highly recommend The New Governors. Her prescriptions are to push platforms towards greater transparency in their content moderation decision making and policies, as well as greater accountability to users. As in Post No Evil (for which she was a source), Professor Klonick identifies the popular concern about platform policies and locates it as a mismatch between platform policies and user expectations.
Professor Klonick also draws out the similarities and differences between content moderation and judicial decision-making. She writes:
- Beyond borrowing from the law substantively, the [Facebook content moderation rule documents called] the Abuse Standards borrow from the way the law is applied, providing examples and analogies to help moderators apply the rules. Analogical legal reasoning, the method whereby judges reach decisions by reasoning through analogy between cases, is a foundation of legal theory. Though the use of example and analogy plays a central role throughout the Abuse Standards, the combination of legal rule and example in content moderation seems to contain elements of both rule-based legal reasoning and analogical legal reasoning. For example, after stating the rules for assessing credibility, the Abuse Standards give a series of examples of instances that establish credible or noncredible threats. “I’m going to stab (method) Lisa H. (target) at the frat party (place),” states Abuse Standards 6.2, demonstrating a type of credible threat that should be escalated. “I’m going to blow up the planet on new year’s eve this year” is given as an example of a noncredible threat. Thus, content moderators are not expected to reason directly from prior content decisions as in common law — but the public policies, internal rules, examples, and analogies they are given in their rulebook are informed by past assessments.
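The rule-plus-example structure Klonick describes can be illustrated with a deliberately tiny sketch. This is my own toy model, not Facebook's actual system: it assumes a moderator has already annotated a threat with its concrete details (as in the quoted "stab (method) Lisa H. (target) at the frat party (place)" example), and escalates only when all three specifics are present.

```python
def is_credible_threat(method=None, target=None, place=None):
    """Toy rule mirroring the quoted Abuse Standards examples: a
    threat annotated with a concrete method, a specific target, and
    a real place is escalated as credible; a vague or impossible
    threat ("I'm going to blow up the planet") is not, because the
    moderator cannot annotate all three specifics."""
    # Escalate only when every concrete detail has been identified.
    return all(detail is not None for detail in (method, target, place))
```

The interesting work, of course, happens in the annotation step this sketch skips: the moderator's judgment about what counts as a concrete method, target, or place is exactly where the analogical reasoning from the rulebook's examples comes in.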
Ellen Pao, Let's Stop Pretending Facebook and Twitter's CEOs Can't Fix This Mess, Wired, Aug 28, 2018; and Kara Swisher and Ron Wyden, Full Q&A: Senator Ron Wyden on Recode Decode, Recode Decode, Aug 22, 2018
I include these two as good examples of the current mood. Both Ms. Pao and Senator Wyden are friends of tech and highly knowledgeable about it. Ms. Pao was the CEO of Reddit. Senator Wyden was one of the authors of the original statute that encouraged content moderation by protecting platforms that moderate content from many types of liability. Nevertheless, Ms. Pao believes that tech CEOs don't care about and aren't trying to solve the issue of bad speech on their platforms. She calls for legal liability for falsity and harassment on platforms.
- If you’re a CEO and someone dies because of harassment or false information on your platform—even if your platform isn’t alone in the harassment—your company should face some consequences. That could mean civil or criminal court proceedings, depending on the circumstances. Or it could mean advertisers take a stand, or your business takes a hit.
Senator Wyden, for his part, wants a law that:
- ... lay[s] out what the consequences are when somebody who is a bad actor, somebody who really doesn’t meet the decency principles that reflect our values, if that bad actor blows by the bounds of common decency, I think you gotta have a way to make sure that stuff is taken down.
- ... I don't know of many good examples outside of heavily editorial ones with a relatively small set of content producers, that have been able to be both extremely inclusive and progressive towards what I think are the "right" kind of marginalized ideas while keeping out the ones that I think are marginalized for very good reason. ...
Many of the larger Internet platforms are trying, with varying degrees of success and failure, to do this right, as I was when I worked at Google and Twitter. That said, I don't have a great example of a platform or community that is working exactly as I would like. And it seems like that is a big and worthy challenge.
Nevertheless, it is important to understand that this is where public opinion is headed and these two pieces are a good indication.
Finally, if you want to find out more about content moderation, here's a twitter list of content moderation folks on Twitter. If I'm missing someone, please let me know.
* This is really specific to me and your mileage may vary widely. I am a white male with lots of privilege. Take what I say about evil content with a huge grain of salt. I am relatively unthreatened by that content compared to someone who has had their life impacted by that evil. I get that some societies will want to ensure that books like Mein Kampf are not available in libraries. I don't believe that is the right way forward, but I may not be best situated to make that call.
** Facebook does use some of these tactics for advertising and Facebook Pages but, as far as I know, not for Facebook Posts or Groups.
*** Professor Klonick's description of Twitter's early content policies as non-existent is mistaken. Even early in Twitter's history the company had content policies which resulted in the removal of content, for example, for impersonation or child pornography. I think she just didn't have a good source of information for Twitter.
Posted by A M [Labels: code, expression, law]