Christy Warden (Social Media)


09/27/16

Talked with Ramee about what kind of content the Twitter account seeks to retweet or take links from. Issues she has with HootSuite:
- Content in the feed is not relevant and often comes from illegitimate sources (random people's tweets that happen to contain the word "entrepreneurship").

Goals for HOOTSUITE:
- Improve the filters to grab tweets with legitimate content.
- Innovation/research content is good from most fields, specifically life sciences and health.
- Content from around Houston, San Francisco, or Boston is preferred.

THINGS I DID for HOOTSUITE:
- Added the filter:links operator to the HootSuite feeds so they only include tweets that link to external sources (hopefully this increases the share of legitimate tweets).
- Added a geolocation (Houston) to the innovation feed to narrow the scope of the search.
- Added a "patent" search to the entrepreneurship feed and "research" to the innovation feed.
- Required both feeds to filter for tweets containing links.
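A HootSuite search stream takes a standard Twitter search query, so for reference the two feeds now look roughly like the lines below (the exact keywords and the Houston radius are approximations from memory, not copied out of HootSuite):

    Entrepreneurship feed: (entrepreneurship OR patent) filter:links
    Innovation feed:       (innovation OR research) filter:links near:"Houston" within:50mi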


ANTICIPATING IMPORTANT TWEETS/BLOGPOSTS BRAINSTORMING:
- Example given by Dr. Egan: we could have created a blogpost linking to all of the channels that the debates would be shown on.
- Looking for a calendar to anticipate upcoming events: http://www.zerohedge.com/news/2015-12-31/whats-ahead-2016-key-events-next-12-months ?
- Potentially have people write blogposts with searchable terms/tags just before events ("10 Nobel Prize Innovators Who blah blah blah" right before the announcement of the Nobel Prizes this October).


FINDING PEOPLE WHO FOLLOW PEOPLE LIKE US: I am reading about this guy's crawler https://github.com/bianjiang/tweetf0rm which appears to do this. I will continue looking at it on Thursday.


09/29/16

MAKING NOTE OF OUR EXISTING CRAWLERS

Existing Crawlers

Spent a significant amount of time with Harsh trying to figure out how to get the existing Twitter crawler to work and download its output file to a place we can access.

HERE IS THE PLAN for using the Twitter crawler to find relevant people to follow. I am changing the crawler so that it will do this:

WE PLUG IN: the Twitter handle of a person we think posts content similar to ours, or whose followers are likely to overlap with people interested in us.

WHAT THE CRAWLER WILL DO: crawl their tweets and count, for each tweet, how many entrepreneurship buzzwords it contains. Take the top-scoring tweet and crawl the accounts that retweeted it. Rank those retweeters by selection criteria which I haven't totally decided on yet, but might include:
- how many of their tweets contain buzzwords
- their follower/following ratio
- how active they are

OUTPUT: a list of Twitter handles of people who are similar to us, like our kind of content, and are likely to follow back and interact with us.
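This log doesn't record which library the crawler uses; here is a minimal sketch of the plan above, assuming tweepy and an already-authenticated API object, with a placeholder BUZZWORDS list and rough stand-ins for the ranking criteria (the retweeters' bios stand in for crawling each of their timelines):

    # Sketch only: assumes tweepy (not confirmed in this log) and that `api`
    # is a tweepy.API object built from real credentials.
    import tweepy

    BUZZWORDS = ["entrepreneurship", "startup", "innovation", "patent", "research"]  # placeholder list

    def buzzword_score(text):
        """Count how many buzzwords appear in a piece of text."""
        text = text.lower()
        return sum(1 for word in BUZZWORDS if word in text)

    def candidate_followers(api, screen_name):
        """Find the user's highest-scoring tweet and rank the accounts that retweeted it."""
        tweets = api.user_timeline(screen_name=screen_name, count=200)
        if not tweets:
            return []
        best = max(tweets, key=lambda t: buzzword_score(t.text))
        retweeters = [rt.user for rt in api.retweets(best.id)]

        def rank(user):
            # Rough stand-ins for the criteria above: buzzword use in the bio,
            # follower/following ratio (capped), and overall activity.
            ratio = user.followers_count / float(max(user.friends_count, 1))
            return buzzword_score(user.description or "") + min(ratio, 5) + min(user.statuses_count / 1000.0, 5)

        retweeters.sort(key=rank, reverse=True)
        return [u.screen_name for u in retweeters]

Calling candidate_followers(api, "some_handle") would return the kind of list described under OUTPUT; the exact scoring weights are still an open question.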



10/04/16

I spent the majority of today building a function which takes a username as input and returns a list of people who use our buzzwords and who we should potentially follow. The function is almost done and I estimate I can finish it by Thursday.

For about an hour and a half, I compiled a datasheet of Trump's Twitter activity since his nomination. I emailed this file to Ed and Anne.


10/06/16

Christy Warden (Twitter Crawler Application 1)

10/18/16

The first crawler is complete! It returns an Excel file of ranked retweeters of relevant tweets. I tested this on a bunch of users and am getting results that I think are good. The next step is to talk to someone about what exactly we want to do with this information. One issue is that someone whose tweets are good may have no retweeters, which makes it difficult to get any information out of their page. Again, this is located on my RDP at Documents/My Projects/Twitter Crawler/
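For the record, the output step is just a spreadsheet dump of the ranked handles; a minimal sketch using Python's csv module (which Excel opens directly) is below. The file and column names are placeholders, and the real crawler may well write .xlsx instead:

    import csv

    def write_ranked(handles, path="ranked_retweeters.csv"):
        """Write the ranked handles to a file Excel can open."""
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["rank", "screen_name"])
            for rank, handle in enumerate(handles, start=1):
                writer.writerow([rank, handle])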

10/20/16

Today I created a plan to experiment with the crawler, which is explained on Christy Warden (Twitter Crawler Application 1). I researched and followed accounts recommended by the crawler and plan to check back on Tuesday to see if they follow back. After I do this a few times, I will be able to see how to adjust my criteria for choosing someone to follow, and I plan on automating the system. The end goal would be for me to run a large program every time I come to work that unfollows people we followed who didn't follow back and follows new people based on the algorithms that I am testing. I am spending time today figuring out how to automate the follow/unfollow process so that it can be run quickly once I get some results from this initial follow spree.
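As a sketch of what that automation could look like (again assuming tweepy; the helper names here are mine, not the crawler's):

    def follow_candidates(api, handles):
        """Follow each account the crawler recommended."""
        for handle in handles:
            api.create_friendship(screen_name=handle)

    def unfollow_non_followers(api, handles_we_followed):
        """Unfollow anyone from a previous spree who has not followed us back."""
        follower_ids = set(api.followers_ids())  # IDs of our current followers
        for handle in handles_we_followed:
            user = api.get_user(screen_name=handle)
            if user.id not in follower_ids:
                api.destroy_friendship(screen_name=handle)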

10/25/16

Today I came back to discover that only 2 people out of approximately 30 followed us back after last week's follow spree. I wrote a program which unfollows all the people who didn't follow us back so that we don't rack up a huge number of accounts we follow. I am considering that it might be better to target people who don't have a high number of followers or whose follower/following ratio is very low. I incorporated these components into my algorithm, but I am not certain that I have found the optimal balance for the total score of a potential follower; I used a tactic which incorporated this score concept this week. Additionally, I automated this process so that people who achieve a threshold score are automatically followed by the program, which significantly improved my efficiency in following. Because of this, I was able to follow around 70 people this week, which should provide us with more data for Thursday when I check the results of this experiment. I plan on asking some of the stat/math interns for help with calculating the significance of the scores so that I can figure out which score make-up best correlates with the probability of a follow back.
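The weights and cutoff below are placeholders rather than the values I am actually testing, but this is the shape of the threshold rule, reusing buzzword_score from the earlier sketch:

    FOLLOW_THRESHOLD = 3.0  # placeholder cutoff, still being tuned

    def follow_score(user):
        """Favor accounts that use our buzzwords and are active, and penalize
        accounts whose follower/following ratio is already high."""
        ratio = user.followers_count / float(max(user.friends_count, 1))
        return (2.0 * buzzword_score(user.description or "")
                + min(user.statuses_count / 1000.0, 2.0)
                - ratio)

    def auto_follow(api, users):
        """Automatically follow anyone who clears the threshold."""
        for user in users:
            if follow_score(user) >= FOLLOW_THRESHOLD:
                api.create_friendship(screen_name=user.screen_name)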

Something interesting that I noticed was that when I was following large numbers of people today, we gained about 4 followers. I assume those accounts are also operating on a crawler of some kind and noticed us following mutual accounts? I wonder if we could explore this as a strategy in and of itself, gaining followers by following huge batches of people that are tracked by other crawlers? I am not sure this would be efficient, however, because I am assuming those crawlers also unfollow people who fail to respond to them which defeats the purpose.

Another thought: the only way that we will ever break out of the "follow someone and hope for a follow" 1:1 ratio is if the followers that we are gaining are people who will retweet us and interact with our content. That way they will garner attention for us in their own audience of followers and we will gain followers without having to follow them first. So even though we only gained 2 followers out of the 30 I followed last time, they actually seemed like quality accounts and one of them even retweeted us. I think the long process of seeking out accounts carefully, rather than just following mass numbers of people, will ultimately build an active follower base. Thus, we won't become one of those accounts with like 73k followers that get 1 favorite and 0 retweets on the vast majority of their content.

We need to start considering automated interactions with the accounts that follow us or that we follow (like maybe auto-favoriting a few of their tweets, or having someone draft a DM for the people we are trying to win over?). I am definitely not the best person to come up with a framework for this, however, so I would need to talk to Ramee, Anne, or any of the social sciences interns about some possible approaches.