So I entered another hackathon through DevPost. This one was the Terminal Hackathon: Tech Takes On Mental Health, where we had to come up with something for mental health.
I paired up with someone from the previous hackathon, and I believe two others joined in. With the short turnaround I was the only developer on this one, but everyone chipped in in other ways.
We wound up making Tell Me Something Good. It’s a basic page that allows a user to input how they are feeling. It then sends that submission to the Google Cloud Natural Language REST API, which analyzes the submitted text’s sentiment and returns a score: a 1 if it’s very positive, a -1 if it’s extremely negative, with increments in between.
It’s actually a pretty slick API. We threw all sorts of sentences and paragraphs at it, and the ML (machine learning) really does an amazing job of giving accurate results.
An example of the project can be found here on Glitch:
This one was very fun too. It took me a little bit to get the authorization going with the REST API, but once I got it going it was very flexible and easy to work with. I’m starting to come up with some very creative uses for sentiment analysis.
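Roughly, the call boils down to one POST to the `documents:analyzeSentiment` method. Here’s a minimal sketch, assuming API-key authorization (the `describeScore` helper and its cutoffs are just my own illustration, not part of the API):

```javascript
// Sketch of the sentiment call. The endpoint and request body follow the
// public documents:analyzeSentiment REST method; describeScore and its
// thresholds are illustrative only.
async function analyzeSentiment(text, apiKey) {
  const res = await fetch(
    `https://language.googleapis.com/v1/documents:analyzeSentiment?key=${apiKey}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        document: { type: "PLAIN_TEXT", content: text },
      }),
    }
  );
  const data = await res.json();
  return data.documentSentiment.score; // ranges from -1.0 to 1.0
}

// Turn the raw score into something friendlier for the page
function describeScore(score) {
  if (score >= 0.25) return "positive";
  if (score <= -0.25) return "negative";
  return "neutral";
}
```

The score comes back on the document as a whole, so longer mixed-mood paragraphs tend to land near the middle of the range.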
I signed up pretty early on, but was having a really hard time finding teammates. I asked some friends and some strangers, with no luck. Just as I thought, oh well, it’s a no-go for me, I got a request to join a team.
My team was great, with people located all over the country. Many in school and some with full-time jobs. It was a solid mix.
We ultimately developed PrivIQ. It’s an Edge Extension that helps a visitor quickly see whether a site collects private data on them.
What really impressed me most was how fast the Text Analytics API analyzed and returned the response. I was sending over 5,000 characters and it returned in milliseconds. I wasn’t expecting such an immediate response.
So once we got our response of key words, we then compared them to the list of privacy terms we were looking for. Depending on how many matches were returned, we either alerted the visitor through the Extension that yes, their data is being collected, or that it may be being collected.
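The comparison itself was simple counting. Something along these lines, where the term list and thresholds are illustrative rather than our exact values:

```javascript
// Count how many extracted key words/phrases hit the privacy-term list,
// then decide what the Extension should tell the visitor.
// The terms and thresholds here are illustrative.
const PRIVACY_TERMS = ["personal data", "cookies", "third parties", "tracking"];

function privacyVerdict(keyPhrases, terms = PRIVACY_TERMS) {
  const lower = keyPhrases.map((p) => p.toLowerCase());
  const matches = terms.filter((t) => lower.some((p) => p.includes(t)));
  if (matches.length >= 3) return "Your data is being collected";
  if (matches.length >= 1) return "Your data may be being collected";
  return "No collection language detected";
}
```

Keeping this as a pure function made it easy to test against different privacy pages without loading the Extension.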
Other team members worked on the design of the extension, the web scraper that would pull the privacy text from a site, and putting it all together in an Edge Extension.
Unfortunately we had a hard time getting our scraper to work in the Extension. There were some CORS issues (which did make sense), and we attempted to call an outside server-side script, like an API, at the last moment, but alas ran out of time.
I would like to continue to learn how to better write an Azure Function. That’s the route I would have liked to have taken.
We got our almost fully operational example submitted just a few minutes ago, and I’m really glad I participated. It really was a lot of fun and forced me to learn some new/better ways of coding.
I could still use some work on Promises and async/await, but the videos on those topics were extremely helpful too. While I can use them, I can’t 100% say that I fully understand what I’m doing when I use them, and I wouldn’t want to have to explain how they work to others just yet. Ha, someday I’ll have them mastered.
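For my own future reference, the two styles boil down to the same thing (toy `delay` helper below, names are just for illustration):

```javascript
// A toy Promise: resolves with `value` after `ms` milliseconds
function delay(ms, value) {
  return new Promise((resolve) => setTimeout(() => resolve(value), ms));
}

// Style 1: .then() chaining
function shoutLater(word) {
  return delay(50, word).then((v) => v.toUpperCase());
}

// Style 2: async/await — same behavior, but reads top-to-bottom
async function shoutLaterAwait(word) {
  const v = await delay(50, word);
  return v.toUpperCase();
}
```

Both functions return a Promise; `await` just lets you write the continuation without nesting callbacks.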
While 2020 certainly hasn’t been the finest of years thus far, there have been some bright spots. I’ve found that my amount of online learning has gone up with so many of my other activities having gone down.
This year I’ve gone full speed into Hacktoberfest, and have already submitted my 4 pull requests! OK, 2 of them were rather simple pull requests, but 4 nonetheless. Hopefully I’ll get a t-shirt when it’s all said and done.
I believe my finest pull request went towards confetti.js, a pretty slick little script to display confetti on a page. I added the ability to customize the colors used in the confetti and sent a Pull Request in. Hopefully it’s something they can incorporate.
Hacktoberfest does a really great job of going over the basics of using GitHub. While I do use GitHub often, there’s always something new to learn or a better way to do things using it.
While I typically keep myself rather busy, Hacktoberfest really just gets me excited to contribute to open source projects. A great event, and I’m looking forward to contributing to more projects, and hopefully scoring a free t-shirt!
This was my first course on Codecademy, and I have to say I was really impressed. The interface was great, very intuitive, with the concepts chunked up in an easy-to-follow way.
I didn’t try the Projects and Quizzes as they’re part of the Pro membership (which I’m not seriously considering giving a shot), but everything else was pretty great. I’d say I learned a couple of new tricks and am a lot more comfortable using the newer ES6 syntax in my projects now.
I’d highly recommend Codecademy to those looking to learn to code or to improve their coding skills.
There’s a lot going on under the hood with this one, and we’ve utilized what I think are some pretty cool techniques to make it not only easy to use but also easy for our content managers to keep up to date.
Our CMS (content management system) is OU Campus, which specializes in higher ed. They’re great for us. Their templating system relies on XML/XSLT (not always so great), so ultimately a content manager typically is editing an XML file on the staging server that is then transformed into an HTML file that lives on the production server for visitors to see.
So to make editing as seamless as possible, we set up a table in OU Campus that our content manager can edit, just like any regular page. Just what they’re comfortable doing day in and day out. We then leverage some XSLT magic (there are parts of it that I still believe to be magic even though I wrote it) to transform the content manager’s edits. Here’s what’s happening when they edit:
1. The content manager makes normal edits to an HTML table with columns, rows, etc. Each degree is a new row in the table.
2. On publish, the XSLT transforms the edited XML into two files. The first is a standard HTML page of the table, transformed and grouped by degree type.
This is the most basic but functional version of the page. It’s served up like this to meet any visitors who may be using the most ancient of browsers. They will still, at minimum, get a fully functioning list of linked degrees.
3. The table is also transformed, via another XSLT file, into a JSON file. This JSON file serves as our data source of areas of study. Having an external JSON file makes it easier to use the data in other pages/applications too!
So now we have 2 files: the HTML page, which is fully functional yet a tad boring, and a JSON file just waiting to be used.
We didn’t use the vue-cli, just some vanilla Vue for this one. If we do a 2.0 we’ll probably go with vue-cli, as we now have a much better understanding of it and its benefits.
4. The Vue script not only filters by title, school, etc., but we also include a field for tags. This really opens the door for us to make sure that our degrees can be found not only by their proper name, but also by a name used at other institutions, or even by a career-driven search term. This should be a big upgrade for all.
5. To top it off, we collect Events in Google Analytics for the terms searched. This data will help us make sure the tags being used make sense, and whether we should consider adding new tags or even renaming or launching new degrees in the future.
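The filter in step 4 amounts to matching the search term against title, school, and tags. Roughly like this — the data shape, field names, and sample records are illustrative, not our actual feed:

```javascript
// Each degree record carries its official name plus searchable tags.
// Field names and sample data are illustrative.
const degrees = [
  { title: "Computer Science", school: "Engineering", tags: ["programming", "software", "coding"] },
  { title: "Kinesiology", school: "Health Sciences", tags: ["exercise science", "physical therapy"] },
];

function filterDegrees(list, query) {
  const q = query.trim().toLowerCase();
  if (!q) return list; // empty search shows everything
  return list.filter(
    (d) =>
      d.title.toLowerCase().includes(q) ||
      d.school.toLowerCase().includes(q) ||
      d.tags.some((t) => t.toLowerCase().includes(q))
  );
}
```

In the Vue version this sits behind a computed property, so the list re-filters as the visitor types.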
All in all, a very fun project that really helps to modernize a highly used page on our site. We’re looking to incorporate the JSON data into other apps/uses as well.